On Wednesday, 26.06.2013, 11:18 +0200, Heiko Stübner wrote:
> Hi Philipp,
> 
> On Tuesday, 25 June 2013, 12:17:05, Philipp Zabel wrote:
> > Hi Heiko,
> > 
> > On Tuesday, 25.06.2013, 10:47 +0200, Heiko Stübner wrote:
> > > Some SoCs need parts of their sram for special purposes. So while being
> > > part of the peripheral, it should not be part of the genpool controlling
> > > the sram.
> > > 
> > > Therefore add an option mmio-sram-reserved to keep arbitrary portions of
> > > the sram from being part of the pool.
> > > 
> > > Suggested-by: Rob Herring <[email protected]>
> > > Signed-off-by: Heiko Stuebner <[email protected]>
> > > ---
> > > 
> > >  Documentation/devicetree/bindings/misc/sram.txt |    8 +++
> > >  drivers/misc/sram.c                             |   86 +++++++++++++++++++++--
> > >  2 files changed, 88 insertions(+), 6 deletions(-)
> > > 
> > > diff --git a/Documentation/devicetree/bindings/misc/sram.txt b/Documentation/devicetree/bindings/misc/sram.txt
> > > index 4d0a00e..eae080e 100644
> > > --- a/Documentation/devicetree/bindings/misc/sram.txt
> > > +++ b/Documentation/devicetree/bindings/misc/sram.txt
> > > 
> > > @@ -8,9 +8,17 @@ Required properties:
> > >  - reg : SRAM iomem address range
> > > 
> > > +Optional properties:
> > > +
> > > +- mmio-sram-reserved: ordered list of reserved chunks inside the sram that
> > > +  should not become part of the genalloc pool.
> > > +  Format is <base size>, <base size>, ...; with base being relative to the
> > > +  reg property base.
> > > +
> > 
the keyword to reserve blocks of RAM is /memreserve/ - should this
property name be aligned with that?
> 
> The mmio-sram-reserved name was suggested by Rob Herring, who I suppose has 
> some slight experience with devicetree :-).

That did sound like a suggestion off the top of his head, though.
I just wanted to point out the similarity in function, in case it is
relevant.

> I wasn't able to find real documentation on /memreserve/, but it looks more 
> like it's used to reserve generic memory regions and is not node-specific.
> So reusing it might also cause confusion, since the reserved data here is 
> relative to its node's reg.

True.
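
To illustrate the difference (a rough sketch, not based on any real dts):
/memreserve/ is a top-level dts directive taking absolute physical addresses,
while the property proposed here lives in the node and takes offsets relative
to its reg base:

	/dts-v1/;

	/* global reservation, absolute physical address range */
	/memreserve/ 0x5c000000 0x100;

	/ {
		sram: sram@5c000000 {
			compatible = "mmio-sram";
			reg = <0x5c000000 0x40000>;

			/* node-local, relative to reg */
			mmio-sram-reserved = <0x0 0x100>;
		};
	};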

> > >  Example:
> > >  
> > >  sram: sram@5c000000 {
> > >  
> > >   compatible = "mmio-sram";
> > >   reg = <0x5c000000 0x40000>; /* 256 KiB SRAM at address 0x5c000000 */
> > > 
> > > + mmio-sram-reserved = <0x0 0x100>; /* reserve 0x5c000000-0x5c000100 */
> > > 
> > >  };
> > > 
> > > diff --git a/drivers/misc/sram.c b/drivers/misc/sram.c
> > > index afe66571..5fccbe3 100644
> > > --- a/drivers/misc/sram.c
> > > +++ b/drivers/misc/sram.c
> > > @@ -42,6 +42,8 @@ static int sram_probe(struct platform_device *pdev)
> > >   struct sram_dev *sram;
> > >   struct resource *res;
> > >   unsigned long size;
> > > + const __be32 *reserved_list = NULL;
> > > + int reserved_size = 0;
> > >   int ret;
> > > 
> > >   res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
> > > 
> > > @@ -65,12 +67,89 @@ static int sram_probe(struct platform_device *pdev)
> > >   if (!sram->pool)
> > >           return -ENOMEM;
> > > 
> > > - ret = gen_pool_add_virt(sram->pool, (unsigned long)virt_base,
> > > -                         res->start, size, -1);
> > > - if (ret < 0) {
> > > -         if (sram->clk)
> > > -                 clk_disable_unprepare(sram->clk);
> > > -         return ret;
> > > + if (pdev->dev.of_node) {
> > > +         reserved_list = of_get_property(pdev->dev.of_node,
> > > +                                         "mmio-sram-reserved",
> > > +                                         &reserved_size);
> > > +         if (reserved_list) {
> > > +                 reserved_size /= sizeof(*reserved_list);
> > > +                 if (!reserved_size || reserved_size % 2) {
> > > +                         dev_warn(&pdev->dev, "wrong number of arguments in mmio-sram-reserved\n");
> > > +                         reserved_list = NULL;
> > > +                 }
> > > +         }
> > > + }
> > > +
> > > + if (!reserved_list) {
> > > +         ret = gen_pool_add_virt(sram->pool, (unsigned long)virt_base,
> > > +                                 res->start, size, -1);
> > > +         if (ret < 0) {
> > > +                 if (sram->clk)
> > > +                         clk_disable_unprepare(sram->clk);
> > > +                 return ret;
> > > +         }
> > 
> > Moving the clk_prepare_enable() further down would allow to avoid the
> > clk_disable_unprepare() in every error path,
> > 
> > > + } else {
> > > +         unsigned int cur_start = 0;
> > > +         unsigned int cur_size;
> > > +         unsigned int rstart;
> > > +         unsigned int rsize;
> > > +         int i;
> > > +
> > > +         for (i = 0; i < reserved_size; i += 2) {
> > > +                 /* get the next reserved block */
> > > +                 rstart = be32_to_cpu(*reserved_list++);
> > > +                 rsize = be32_to_cpu(*reserved_list++);
> > > +
> > > +                 /* catch unsorted list entries */
> > > +                 if (rstart < cur_start) {
> > > +                         dev_err(&pdev->dev, "unsorted reserved list (0x%x before current 0x%x)\n",
> > > +                                 rstart, cur_start);
> > > +                         if (sram->clk)
> > > +                                 clk_disable_unprepare(sram->clk);
> > 
> > like here
> > 
> > > +                         return -EINVAL;
> > > +                 }
> > > +
> > > +                 dev_dbg(&pdev->dev, "found reserved block 0x%x-0x%x\n",
> > > +                          rstart, rstart + rsize);
> > > +
> > > +                 /* current start is in a reserved block */
> > > +                 if (rstart <= cur_start) {
> > > +                         cur_start = rstart + rsize;
> > > +                         continue;
> > > +                 }
> > > +
> > > +                 /*
> > > +                  * allocate the space between the current starting
> > > +                  * address and the following reserved block
> > > +                  */
> > > +                 cur_size = rstart - cur_start;
> > > +
> > > +                 dev_dbg(&pdev->dev, "adding chunk 0x%x-0x%x\n",
> > > +                          cur_start, cur_start + cur_size);
> > > +                 ret = gen_pool_add_virt(sram->pool,
> > > +                                 (unsigned long)virt_base + cur_start,
> > > +                                 res->start + cur_start, cur_size, -1);
> > > +                 if (ret < 0) {
> > > +                         if (sram->clk)
> > > +                                 clk_disable_unprepare(sram->clk);
> > 
> > and here.
> > 
> > > +                         return ret;
> > > +                 }
> > > +
> > > +                 /* next allocation after this reserved block */
> > > +                 cur_start = rstart + rsize;
> > > +         }
> > > +
> > > +         /* allocate the space after the last reserved block */
> > > +         if (cur_start < size) {
> > > +                 cur_size = size - cur_start;
> > > +
> > > +                 dev_dbg(&pdev->dev, "adding chunk 0x%x-0x%x\n",
> > > +                          cur_start, cur_start + cur_size);
> > > +                 ret = gen_pool_add_virt(sram->pool,
> > > +                                 (unsigned long)virt_base + cur_start,
> > > +                                 res->start + cur_start, cur_size, -1);
> > > +         }
> > > +
> > > 
> > >   }
> > >   
> > >   platform_set_drvdata(pdev, sram);
> > 
> > Also, I think you could reduce the duplication of the gen_pool_add_virt()
> > calls, something like this:
> > 
> >     unsigned int cur_start = 0;
> >     unsigned int cur_size;
> >     unsigned int rstart;
> >     unsigned int rsize;
> >     int i = 0;
> > 
> >     if (!reserved_list)
> >             reserved_size = 0;
> > 
> >     for (i = 0; i < (reserved_size + 2); i += 2) {
> >             if (i < reserved_size) {
> >                     /* get the next reserved block */
> >                     rstart = be32_to_cpu(*reserved_list++);
> >                     rsize = be32_to_cpu(*reserved_list++);
> > 
> >                     /* catch unsorted list entries */
> >                     if (rstart < cur_start) {
> >                             dev_err(&pdev->dev,
> >                                     "unsorted reserved list (0x%x before 
> > current 0x%x)\n",
> >                                     rstart, cur_start);
> >                             return -EINVAL;
> >                     }
> > 
> >                     dev_dbg(&pdev->dev,
> >                             "found reserved block 0x%x-0x%x\n",
> >                             rstart, rstart + rsize);
> >             } else {
> >                     /* the last chunk extends to the end of the region */
> >                     rstart = size;
> >             }
> > 
> >             /* current start is in a reserved block */
> >             if (rstart <= cur_start) {
> >                     cur_start = rstart + rsize;
> >                     continue;
> >             }
> > 
> >             /*
> >              * allocate the space between the current starting
> >              * address and the following reserved block, or the
> >              * end of the region.
> >              */
> >             cur_size = rstart - cur_start;
> > 
> >             dev_dbg(&pdev->dev, "adding chunk 0x%x-0x%x\n",
> >                     cur_start, cur_start + cur_size);
> >             ret = gen_pool_add_virt(sram->pool,
> >                             (unsigned long)virt_base + cur_start,
> >                             res->start + cur_start, cur_size, -1);
> >             if (ret < 0)
> >                     return ret;
> >     }
> 
> yep, this looks nicer - same for moving the clk_prepare_enable() to below 
> this loop to unclutter the error path.
> 
> So I will incorporate this in v3.

Excellent.
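
For reference, with the clock started only after the pool is populated, the
tail of sram_probe() could look roughly like this (untested sketch, based on
the existing devm_clk_get() handling in sram.c):

	/*
	 * chunk loop from above goes here; its error paths can simply
	 * return ret, since the clock is not running yet
	 */

	/* only now start the clock, so no error path has to undo it */
	sram->clk = devm_clk_get(&pdev->dev, NULL);
	if (IS_ERR(sram->clk))
		sram->clk = NULL;
	else
		clk_prepare_enable(sram->clk);

	platform_set_drvdata(pdev, sram);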

thanks
Philipp

