for your images 12

2018-10-25 Thread Kate

We are an imaging team who can process 300+ images daily.

If you need any image editing service, please contact us today.

We mainly do image cut-outs, clipping paths, and masking,
such as for e-commerce photos, jewelry photo retouching,
beauty portraits and skin images,
and wedding photos.

We provide test editing if you send some photos.

Thanks,
Kate



Images 34

2018-10-18 Thread Nancy

We are an image team who can process 400+ images each day.

If you need any image editing service, please let us know.

We do image cut-outs, clipping paths, and masking,
such as for e-commerce photos, jewelry photo retouching,
beauty and skin images, and wedding photos.

We will do a test edit for your photos if you send us some.

Thanks,
Nancy



Your images 22

2018-10-04 Thread Joanna

Have photos for cutting out or retouching?

We are an image team and we edit e-commerce photos,
industry photos, and portraits.

If you need a test edit, let me know.
I look forward to your reply and your photo work.

Thanks,
Joanna



Give retouching for your images

2018-09-03 Thread Jason Jones

Hi,

I would like to speak with the person who handles the images at your
company.

Our image editing capabilities:
Cutting out for your images
Clipping path for your images
Masking for your images
Retouching for your images
Retouching for portrait images

We have 17 photo editors in house and can process 700 images per day.

We can do a test edit if you send us one or two photos to start with.

Thanks,
Jason Jones



Images processing

2018-08-31 Thread Jimmy Wilson

Hi,

Have you received my email from last week?

I would like to speak with the person who manages the photos for your
company.

We are here to provide you with all kinds of image editing.

What we can provide you:
Cutting out for photos
Clipping path for photos
Masking for photos
Retouching for your photos
Retouching for beauty models and portraits.

We have 20 staff in house and can process 1000 images per day.

We offer test editing for your photos.

Thanks,
Jimmy Wilson



Enhancing your images

2018-08-03 Thread Sam

Do you have photos for editing?
We are an image team and we can process 500+ images each day.

We edit e-commerce photos, jewelry photos, beauty model photos,
and wedding photos.

We do cut-outs and clipping paths for photos, as well as retouching.

You may send us a test photo to check our quality.

Thanks,
Sam Parker



images work for you

2018-07-17 Thread Ruby

We have a team of professionals to provide image editing services for you.

We have 20 image editors and can process 2000 images per day.

If you want to check our quality of work, please send us a photo with
instructions and we will work on it.

Our Services:
Photo cut out, masking, clipping path
Color, brightness and contrast correction
Beauty, Model retouching, skin retouching
Image cropping and resizing
Correcting the shape and size

We do unlimited revisions until you are satisfied with the work.

Thanks,
Ruby Young



images

2018-07-16 Thread Simon Dike

We can process 400+ images per day.
If you need any image editing, please let us know.

Photo cut-outs;
Photo clipping paths;
Photo masking;
Photo shadow creation;
Photo retouching;
Beauty model retouching on skin, face, body;
Glamour retouching;
Product retouching.

We can do a test edit of your photos.

Turnaround time is fast.

Thanks,
Simon



Re: [Ksummit-discuss] Including images on Sphinx documents

2016-11-21 Thread Mauro Carvalho Chehab
On Mon, 21 Nov 2016 16:44:28 +0100,
Johannes Berg <johan...@sipsolutions.net> wrote:

> > > > You had pointed me to this plugin before
> > > > https://pythonhosted.org/sphinxcontrib-aafig/
> > > > 
> > > > but I don't think it can actually represent any of the pictures.  
> > > 
> > > No, but there are some ascii art images inside some txt/rst files
> > > and inside some kernel-doc comments. We could either use the above
> > > extension for them or to convert into some image. The ascii art
> > > images I saw seem to be diagrams, so Graphviz would allow replacing
> > > most of them, if not all.  
> > 
> > Please don't replace ASCII art that effectively conveys conceptual
> > diagrams.  If you do, we'll wind up in situations where someone
> > hasn't built the docs and doesn't possess the tools to see a diagram
> > that was previously shown by every text editor (or can't be bothered
> > to dig out the now separate file).  In the name of creating
> > "prettier" diagrams (and final doc), we'll have damaged capacity to
> > understand stuff by just reading the source if this diagram is in
> > kernel doc comments.  I think this is a good application of "if it
> > ain't broke, don't fix it".  

I agree with it as a general rule. Yet, there are cases where the diagram 
is so complex that rewriting it with Graphviz would make sense, like
the one on this document:

Documentation/media/v4l-drivers/pxa_camera.rst

Regards,
Mauro



> 
> Right, I agree completely!
> 
> That's the selling point of aafig though, it translates to pretty
> diagrams, but looks fine when viewed in a normal text editor (with
> fixed-width font)
> 
> I had a hack elsewhere that would embed the fixed-width text if the
> plugin isn't present, which seemed like a decent compromise, but nobody
> is willing to let plugins be used in general to start with, it seems :)
> 
> johannes


Thanks,
Mauro
--
To unsubscribe from this list: send the line "unsubscribe linux-media" in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html


Re: [Ksummit-discuss] Including images on Sphinx documents

2016-11-21 Thread James Bottomley
On Mon, 2016-11-21 at 12:06 -0200, Mauro Carvalho Chehab wrote:
> Em Mon, 21 Nov 2016 11:39:41 +0100
> Johannes Berg <johan...@sipsolutions.net> escreveu:
> > On Sat, 2016-11-19 at 10:15 -0700, Jonathan Corbet wrote:
> > 
> > > Rather than beating our heads against the wall trying to convert
> > > between various image formats, maybe we need to take a step
> > > back.  We're trying to build better documentation, and there is
> > > certainly a place for diagrams and such in that
> > > documentation.  Johannes was asking about it for the 802.11 docs, 
> > > and I know Paul has run into these issues with the RCU docs as
> > > well.  Might there be a tool or an extension out there that would
> > > allow us to express these diagrams in a text-friendly, editable
> > > form?
> > > 
> > > With some effort, I bet we could get rid of a number of the 
> > > images, and perhaps end up with something that makes sense when 
> > > read in the .rst source files as an extra benefit.   
> > 
> > I tend to agree, and I think that having this readable in the text
> > would be good.
> > 
> > You had pointed me to this plugin before
> > https://pythonhosted.org/sphinxcontrib-aafig/
> > 
> > but I don't think it can actually represent any of the pictures.
> 
> No, but there are some ascii art images inside some txt/rst files
> and inside some kernel-doc comments. We could either use the above
> extension for them or to convert into some image. The ascii art
> images I saw seem to be diagrams, so Graphviz would allow replacing
> most of them, if not all.

Please don't replace ASCII art that effectively conveys conceptual
diagrams.  If you do, we'll wind up in situations where someone hasn't
built the docs and doesn't possess the tools to see a diagram that was
previously shown by every text editor (or can't be bothered to dig out
the now separate file).  In the name of creating "prettier" diagrams
(and final doc), we'll have damaged capacity to understand stuff by
just reading the source if this diagram is in kernel doc comments.  I
think this is a good application of "if it ain't broke, don't fix it".

James



Re: [Ksummit-discuss] Including images on Sphinx documents

2016-11-21 Thread Jani Nikula
On Mon, 21 Nov 2016, Johannes Berg  wrote:
> I had a hack elsewhere that would embed the fixed-width text if the
> plugin isn't present, which seemed like a decent compromise, but nobody
> is willing to let plugins be used in general to start with, it seems :)

FWIW I'm all for doing this stuff in Sphinx, with Sphinx extensions. And
to me it sounds like what you describe is interesting outside of kernel
too.

BR,
Jani.

-- 
Jani Nikula, Intel Open Source Technology Center


Re: [Ksummit-discuss] Including images on Sphinx documents

2016-11-21 Thread Johannes Berg

> > > You had pointed me to this plugin before
> > > https://pythonhosted.org/sphinxcontrib-aafig/
> > > 
> > > but I don't think it can actually represent any of the pictures.
> > 
> > No, but there are some ascii art images inside some txt/rst files
> > and inside some kernel-doc comments. We could either use the above
> > extension for them or to convert into some image. The ascii art
> > images I saw seem to be diagrams, so Graphviz would allow replacing
> > most of them, if not all.
> 
> Please don't replace ASCII art that effectively conveys conceptual
> diagrams.  If you do, we'll wind up in situations where someone
> hasn't built the docs and doesn't possess the tools to see a diagram
> that was previously shown by every text editor (or can't be bothered
> to dig out the now separate file).  In the name of creating
> "prettier" diagrams (and final doc), we'll have damaged capacity to
> understand stuff by just reading the source if this diagram is in
> kernel doc comments.  I think this is a good application of "if it
> ain't broke, don't fix it".

Right, I agree completely!

That's the selling point of aafig though, it translates to pretty
diagrams, but looks fine when viewed in a normal text editor (with
fixed-width font)

I had a hack elsewhere that would embed the fixed-width text if the
plugin isn't present, which seemed like a decent compromise, but nobody
is willing to let plugins be used in general to start with, it seems :)

johannes
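[The fallback Johannes describes could be sketched roughly as follows in a
conf.py; this is my own illustration of the idea, not the actual hack from
the thread, and the assumption is simply that `sphinxcontrib.aafig` may or
may not be installed.]

```python
# Sketch of the fallback idea: enable the optional aafig plugin only when
# it is importable, so the docs still build (with plain literal blocks)
# on systems that don't have it. Helper names here are my own.
import importlib.util

def have_module(name):
    """Return True if `name` can be imported in this environment."""
    try:
        return importlib.util.find_spec(name) is not None
    except ModuleNotFoundError:  # a parent package is missing entirely
        return False

def pick_extensions():
    # sphinx.ext.graphviz ships with Sphinx itself; aafig is optional.
    extensions = ["sphinx.ext.graphviz"]
    if have_module("sphinxcontrib.aafig"):
        extensions.append("sphinxcontrib.aafig")
    return extensions

# In a conf.py one would then write:  extensions = pick_extensions()
```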


Re: [Ksummit-discuss] Including images on Sphinx documents

2016-11-21 Thread Mauro Carvalho Chehab
On Mon, 21 Nov 2016 11:39:41 +0100,
Johannes Berg <johan...@sipsolutions.net> wrote:

> On Sat, 2016-11-19 at 10:15 -0700, Jonathan Corbet wrote:
> > 
> > I don't know what the ultimate source of these images is (Mauro,
> > perhaps you could shed some light there?).  
> 
> I'd argue that it probably no longer matters. Whether it's xfig, svg,
> graphviz originally etc. - the source is probably long lost. Recreating
> these images in any other format is probably not very difficult for
> most or almost all of them.

I did it already. I converted one image to Graphviz and the rest
to SVG.

> 
> > Rather than beating our heads against the wall trying to convert
> > between various image formats, maybe we need to take a step
> > back.  We're trying to build better documentation, and there is
> > certainly a place for diagrams and such in that
> > documentation.  Johannes was asking about it for the 802.11 docs, and
> > I know Paul has run into these issues with the RCU docs as
> > well.  Might there be a tool or an extension out there that would
> > allow us to express these diagrams in a text-friendly, editable
> > form?
> > 
> > With some effort, I bet we could get rid of a number of the images,
> > and perhaps end up with something that makes sense when read in the
> > .rst source files as an extra benefit.   
> 
> I tend to agree, and I think that having this readable in the text
> would be good.
> 
> You had pointed me to this plugin before
> https://pythonhosted.org/sphinxcontrib-aafig/
> 
> but I don't think it can actually represent any of the pictures.

No, but there are some ascii art images inside some txt/rst files
and inside some kernel-doc comments. We could either use the above
extension for them or to convert into some image. The ascii art
images I saw seem to be diagrams, so Graphviz would allow replacing
most of them, if not all.

> Some surely could be represented directly by having the graphviz source
> inside the rst file:
> http://www.sphinx-doc.org/en/1.4.8/ext/graphviz.html
> 
> (that's even an included plugin, no need to install anything extra)

Yes, but it seems that the existing plugin misses some things that
the .. figure:: directive does, like allowing you to specify alternate
text and place a caption on the figure (or table) [1].

Also, as SVG is currently broken on Sphinx for PDF output, we'll
need to pre-process the image before calling Sphinx anyway. So,
IMHO, the best for now would be to use the same approach for both
cases.

On my patchsets, they're doing both SVG and Graphviz handling via
Makefile, before calling Sphinx.

When we have the needed features either at Sphinx upstream or as a
plugin, we could then switch to such solution.

[1] Another missing feature with regards to that is that Sphinx
doesn't seem to be able to produce a list of figures and tables.
Eventually, the image extension (or upstream improvements) that
would implement proper support for SVG and Graphviz could also
implement support for such indexes.

> graphviz is actually quite powerful, so I suspect things like
> dvbstb.png can be represented there, perhaps not pixel-identically, but
> at least semantically equivalently.

Yes, but we'll still need SVG for more complex things. I actually
converted (actually, I rewrote) dvbstb.png as SVG.

> However, I don't think we'll actually find a catch-all solution, so we
> need to continue this discussion here for a fallback anyway - as you
> stated (I snipped that quote, sorry), a picture describing the video
> formats will likely not be representable in text.
> 
> As far as my use-case for sequence diagrams is concerned, I'd really
> like to see this integrated with the toolchain since the source format
> for them is in fact a text format.

Yes, for sure having support for Graphviz will be very useful.

Thanks,
Mauro


Re: [Ksummit-discuss] Including images on Sphinx documents

2016-11-20 Thread Mauro Carvalho Chehab
On Sat, 19 Nov 2016 22:59:01 -,
"David Woodhouse" <dw...@infradead.org> wrote:

> > I think that graphviz and svg are the reasonable modern formats. Let's
> > try to avoid bitmaps in today's world, except perhaps as intermediate
> > generated things for what we can't avoid.  

Ok, I got rid of all bitmap images:
https://git.linuxtv.org/mchehab/experimental.git/log/?h=svg-images

Now, all images are in SVG (one is actually a .dot file - while we don't
have an extension to handle it, I opted to keep both .dot and .svg on
my development tree - I'll likely add a Makefile rule for it too).

I converted the ones from pdf/xfig to SVG, and I rewrote the other ones
in SVG. The most complex one involved cropping a bitmap image. For that, I
took the "Tuz" image - i.e. the one from commit 8032b526d1a3
("linux.conf.au 2009: Tuz") - and used it for the image-crop example. The
file size is way bigger than the previous one (the PNG was 11K; the SVG is
now 563K), but the end result looked nice, IMHO.

> Sure, SVG makes sense. It's a text-based format (albeit XML) and it *can*
> be edited with a text editor and reasonably kept in version control, at
> least if the common tools store it in a diff-friendly way (with some line
> breaks occasionally, and maybe no indenting). Do they?

Inkscape does a good job on breaking lines for a diff-friendly output.

Yet, some lines exceed the maximum line length for e-mails defined by
IETF RFC 2821. The problem is that sending such patches to the mailing
lists could cause them to be ignored.

Not sure what would be the best way to solve such issues.


Thanks,
Mauro


Re: [Ksummit-discuss] Including images on Sphinx documents

2016-11-19 Thread David Woodhouse

> On Sat, Nov 19, 2016 at 9:55 AM, David Woodhouse 
> wrote:
>>
>> I know it's unfashionable these days, but TeX always used to be bloody
>> good at that kind of thing.
>
> You must have used a different TeX than I did.
>
> TeX is a horrible example. The moment you needed to insert anything
> that TeX didn't know about, you were screwed.
>
> I think my go-to for TeX was LaTeX, the "epsfig" thing, and then xfig
> and eps files (using fig2dev). Christ, I get flashbacks just thinking
> about it.

You're right. You included EPS, which was generated from fig.

Now I'm having flashbacks too, and I actually remember.

> I thought one of the points of Sphinx was to not have to play those games.
>
> I think that graphviz and svg are the reasonable modern formats. Let's
> try to avoid bitmaps in today's world, except perhaps as intermediate
> generated things for what we can't avoid.

Sure, SVG makes sense. It's a text-based format (albeit XML) and it *can*
be edited with a text editor and reasonably kept in version control, at
least if the common tools store it in a diff-friendly way (with some line
breaks occasionally, and maybe no indenting). Do they?


-- 
dwmw2



Re: [Ksummit-discuss] Including images on Sphinx documents

2016-11-19 Thread Linus Torvalds
On Sat, Nov 19, 2016 at 12:54 PM, Mauro Carvalho Chehab
<mche...@s-opensource.com> wrote:
>
> I did some research on Friday trying to identify where those images
> came. It turns that, for the oldest images (before I took the media
> maintainership), PDF were actually their "source", as far as I could track,
> in the sense that the *.gif images were produced from the PDF.
>
> The images seem to be generated using some LaTeX tool. Their original
> format were probably EPS.

The original format was almost certainly xfig.

Converting fig files to eps and pdf to then encapsulate them into
LaTeX was a very common way to do documentation with simple figures.
Iirc, xfig natively supported "export as eps".

Linus


Re: [Ksummit-discuss] Including images on Sphinx documents

2016-11-19 Thread Mauro Carvalho Chehab
On Sat, 19 Nov 2016 10:15:43 -0700,
Jonathan Corbet <cor...@lwn.net> wrote:

> On Thu, 17 Nov 2016 08:02:50 -0800
> Linus Torvalds <torva...@linux-foundation.org> wrote:
> 
> > We have makefiles, but more importantly, few enough people actually
> > *generate* the documentation, that I think if it's an option to just
> > fix sphinx, we should do that instead. If it means that you have to
> > have some development version of sphinx, so be it. Most people read
> > the documentation either directly in the unprocessed text-files
> > ("source code") or on the web (by searching for pre-formatted docs)
> > that I really don't think we need to worry too much about the
> > toolchain.
> > 
> > But what we *should* worry about is having the kernel source tree
> > contain source.  
> 
> I would be happy to take a shot at fixing sphinx; we clearly need to
> engage more with sphinx upstream in general.  But I guess I still haven't
> figured out what "fixing sphinx" means in this case.
> 
> I don't know what the ultimate source of these images is (Mauro, perhaps
> you could shed some light there?).  Perhaps its SVG for some of the
> diagrams, but for the raster images, probably not; it's probably some
> weird-ass diagram-editor format.  We could put those in the tree, but
> they are likely to be harder to convert to a useful format and will raise
> all of the same obnoxious binary patch issues.

I did some research on Friday trying to identify where those images
came from. It turns out that, for the oldest images (before I took the media
maintainership), PDF files were actually their "source", as far as I could
track, in the sense that the *.gif images were produced from the PDF.

The images seem to have been generated using some LaTeX tool. Their original
format was probably EPS. I was able to convert those to SVG from their
pdf "source":

    
https://git.linuxtv.org/mchehab/experimental.git/commit/?h=svg-images=9baca9431d333af086c1ccd499668b5b76d35a64

I didn't check yet where the newer images came from, but I guess
that at least some of them were generated using some bitmap editor
like gimp.

> Rather than beating our heads against the wall trying to convert between
> various image formats, maybe we need to take a step back.  We're trying
> to build better documentation, and there is certainly a place for
> diagrams and such in that documentation.  Johannes was asking about it
> for the 802.11 docs, and I know Paul has run into these issues with the
> RCU docs as well.  Might there be a tool or an extension out there that
> would allow us to express these diagrams in a text-friendly, editable
> form?

I guess that a Sphinx extension for graphviz is something that we'll
need sooner or later. One of our images was clearly generated using
graphviz:
Documentation/media/uapi/v4l/pipeline.png

> With some effort, I bet we could get rid of a number of the images, and
> perhaps end up with something that makes sense when read in the .rst
> source files as an extra benefit.  But I'm not convinced that we can,
> say, sensibly express the differences between different video interlacing
> schemes that way.

Explaining visual concepts without images is really hard. Several
images that we use are there to explain things like interlacing,
point (x, y) positions of R, G and B pixels (or YUV), and even
wavelengths to show where the VBI frames are taken. There's not
much we can do to get rid of those images.

We can try to convert those to vector graphics, or encapsulate the bitmaps
inside a SVG file, but still we'll need images on documents.

Thanks,
Mauro


Re: [Ksummit-discuss] Including images on Sphinx documents

2016-11-19 Thread Linus Torvalds
On Sat, Nov 19, 2016 at 9:55 AM, David Woodhouse  wrote:
>
> I know it's unfashionable these days, but TeX always used to be bloody
> good at that kind of thing.

You must have used a different TeX than I did.

TeX is a horrible example. The moment you needed to insert anything
that TeX didn't know about, you were screwed.

I think my go-to for TeX was LaTeX, the "epsfig" thing, and then xfig
and eps files (using fig2dev). Christ, I get flashbacks just thinking
about it.

I thought one of the points of Sphinx was to not have to play those games.

I think that graphviz and svg are the reasonable modern formats. Let's
try to avoid bitmaps in today's world, except perhaps as intermediate
generated things for what we can't avoid.

Linus


Re: [Ksummit-discuss] Including images on Sphinx documents

2016-11-19 Thread Bart Van Assche
On 11/19/16 09:15, Jonathan Corbet wrote:
> Might there be a tool or an extension out there that would allow us
> to express these diagrams in a text-friendly, editable form?

How about using the graphviz language to generate the diagrams that can 
be described easily in it? The graphviz language is well suited for 
version control, and the graphviz software includes tools for converting 
diagrams described in it into many formats, including png, svg and pdf. 
Examples of diagrams generated with graphviz are available at 
http://www.graphviz.org/Gallery.php.

Bart.



Re: [Ksummit-discuss] Including images on Sphinx documents

2016-11-19 Thread David Woodhouse
On Sat, 2016-11-19 at 10:15 -0700, Jonathan Corbet wrote:
> Might there be a tool or an extension out there that
> would allow us to express these diagrams in a text-friendly, editable
> form?

I know it's unfashionable these days, but TeX always used to be bloody
good at that kind of thing.

-- 
dwmw2




Re: [Ksummit-discuss] Including images on Sphinx documents

2016-11-19 Thread Andrew Lunn
> Rather than beating our heads against the wall trying to convert between
> various image formats, maybe we need to take a step back.  We're trying
> to build better documentation, and there is certainly a place for
> diagrams and such in that documentation.  Johannes was asking about it
> for the 802.11 docs, and I know Paul has run into these issues with the
> RCU docs as well.  Might there be a tool or an extension out there that
> would allow us to express these diagrams in a text-friendly, editable
> form?

Hi Jonathan

A lot depends on what the diagram is supposed to show. I've used
graphviz dot in documents which get processed with Sphinx. That works
well for state machines, linked lists, etc. It uses a mainline Sphinx
extension.
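[To illustrate why that works well for version control, here is a toy
generator of my own (not from Andrew's documents): a state machine reduces
to a few lines of DOT text, which diffs cleanly.]

```python
# Toy illustration of how compactly a state machine maps onto graphviz
# DOT; the resulting text is line-oriented and version-control friendly.
def state_machine_dot(name, transitions):
    """Build DOT source from (src, dst, label) transition triples."""
    lines = ["digraph %s {" % name]
    for src, dst, label in transitions:
        lines.append('    %s -> %s [label="%s"];' % (src, dst, label))
    lines.append("}")
    return "\n".join(lines)

dot = state_machine_dot("link", [
    ("down", "up", "carrier on"),
    ("up", "down", "carrier off"),
])
# `dot -Tsvg` (or the sphinx.ext.graphviz directive) renders this text.
```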

It does however increase the size of your documentation toolchain: you
need graphviz. But I doubt there is a distribution which does not have
it.

If you are worried about getting all these needed tools installed, I
think tools/perf might be a useful guide. When you compile it, it
gives helpful hints:

No libdw DWARF unwind found, Please install elfutils-devel/libdw-dev >= 0.158
No sys/sdt.h found, no SDT events are defined, please install 
systemtap-sdt-devel or systemtap-sdt-dev

 Andrew


Re: [Ksummit-discuss] Including images on Sphinx documents

2016-11-19 Thread Jonathan Corbet
On Thu, 17 Nov 2016 08:02:50 -0800
Linus Torvalds <torva...@linux-foundation.org> wrote:

> We have makefiles, but more importantly, few enough people actually
> *generate* the documentation, that I think if it's an option to just
> fix sphinx, we should do that instead. If it means that you have to
> have some development version of sphinx, so be it. Most people read
> the documentation either directly in the unprocessed text-files
> ("source code") or on the web (by searching for pre-formatted docs)
> that I really don't think we need to worry too much about the
> toolchain.
> 
> But what we *should* worry about is having the kernel source tree
> contain source.

I would be happy to take a shot at fixing sphinx; we clearly need to
engage more with sphinx upstream in general.  But I guess I still haven't
figured out what "fixing sphinx" means in this case.

I don't know what the ultimate source of these images is (Mauro, perhaps
you could shed some light there?).  Perhaps its SVG for some of the
diagrams, but for the raster images, probably not; it's probably some
weird-ass diagram-editor format.  We could put those in the tree, but
they are likely to be harder to convert to a useful format and will raise
all of the same obnoxious binary patch issues.

Rather than beating our heads against the wall trying to convert between
various image formats, maybe we need to take a step back.  We're trying
to build better documentation, and there is certainly a place for
diagrams and such in that documentation.  Johannes was asking about it
for the 802.11 docs, and I know Paul has run into these issues with the
RCU docs as well.  Might there be a tool or an extension out there that
would allow us to express these diagrams in a text-friendly, editable
form?

With some effort, I bet we could get rid of a number of the images, and
perhaps end up with something that makes sense when read in the .rst
source files as an extra benefit.  But I'm not convinced that we can,
say, sensibly express the differences between different video interlacing
schemes that way.

jon
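For what it's worth, one existing answer to Jon's question is Graphviz DOT: the diagram source is plain, diffable text, and Sphinx ships a sphinx.ext.graphviz extension that renders it at build time. A minimal sketch; the pipeline topology and node names here are purely illustrative:

```shell
# A diagram kept as text: this is what would live in the tree instead of
# a binary image. The topology is made up for illustration.
cat > media-pipeline.dot <<'EOF'
digraph media_pipeline {
    rankdir=LR;
    sensor -> scaler;
    scaler -> dma_engine [label="video bus"];
}
EOF

# If graphviz is installed, rendering is a one-liner.
if command -v dot >/dev/null 2>&1; then
    dot -Tpng media-pipeline.dot -o media-pipeline.png
fi
```

In a Sphinx document the same source could sit inline under a `.. graphviz::` directive, so it stays readable in the .rst file as well.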


Re: [Ksummit-discuss] Including images on Sphinx documents

2016-11-18 Thread Daniel Vetter
On Fri, Nov 18, 2016 at 11:15:09AM +0200, Jani Nikula wrote:
> On Thu, 17 Nov 2016, Linus Torvalds  wrote:
> > We have makefiles, but more importantly, few enough people actually
> > *generate* the documentation, that I think if it's an option to just
> > fix sphinx, we should do that instead. If it means that you have to
> > have some development version of sphinx, so be it. Most people read
> > the documentation either directly in the unprocessed text-files
> > ("source code") or on the web (by searching for pre-formatted docs)
> > that I really don't think we need to worry too much about the
> > toolchain.
> 
> My secret plan was to make building documentation easy, and then trick
> more people into actually doing that on a regular basis, to ensure we
> keep the build working and the output sensible in a variety of
> environments. Sure we have a bunch of people doing this, and we have
> 0day doing this, but I'd hate it if it became laborious and fiddly to set
> up the toolchain to generate documentation.
> 
> So I'm not necessarily disagreeing with anything you say, but I think
> there's value in having a low bar for entry (from the toolchain POV) for
> people interested in working with documentation, whether they're
> seasoned kernel developers or newcomers purely interested in
> documentation.

Yeah, I want a low bar for doc building too. The initial hack fest we had
to add cross-linking and other dearly needed stuff increased the build
time so much that everyone stopped creating docs. It's of course not as
extreme, but stating that "no one runs the doc toolchain" is imo akin to
"no one runs gcc, developers just read the source and users install rpms".
It's totally true until you try to change stuff; then you have to be
able to build that pile fast and with an easily-obtained toolchain.
-Daniel
-- 
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch


Re: [Ksummit-discuss] Including images on Sphinx documents

2016-11-18 Thread Jani Nikula
On Thu, 17 Nov 2016, Linus Torvalds  wrote:
> We have makefiles, but more importantly, few enough people actually
> *generate* the documentation, that I think if it's an option to just
> fix sphinx, we should do that instead. If it means that you have to
> have some development version of sphinx, so be it. Most people read
> the documentation either directly in the unprocessed text-files
> ("source code") or on the web (by searching for pre-formatted docs)
> that I really don't think we need to worry too much about the
> toolchain.

My secret plan was to make building documentation easy, and then trick
more people into actually doing that on a regular basis, to ensure we
keep the build working and the output sensible in a variety of
environments. Sure we have a bunch of people doing this, and we have
0day doing this, but I'd hate it if it became laborious and fiddly to set
up the toolchain to generate documentation.

So I'm not necessarily disagreeing with anything you say, but I think
there's value in having a low bar for entry (from the toolchain POV) for
people interested in working with documentation, whether they're
seasoned kernel developers or newcomers purely interested in
documentation.

BR,
Jani.

-- 
Jani Nikula, Intel Open Source Technology Center


Re: [Ksummit-discuss] Including images on Sphinx documents

2016-11-17 Thread Linus Torvalds
On Thu, Nov 17, 2016 at 3:07 AM, Arnd Bergmann  wrote:
>
> [adding Linus for clarification]
>
> I understood the concern as being about binary files that you cannot
> modify with classic 'patch', which is a separate issue.

No. That is how I *noticed* the issue. Those stupid pdf binary files
have been around forever, I just didn't notice until the Fedora people
started complaining about the patches.

My real problem is not "binary" or even "editable", but SOURCE CODE.

It's like including a "vmlinux" image in the git tree: sure, you can
technically "edit" it with a hex editor, but that doesn't change the
basic issue: it's not the original source code.

I don't want to see generated binary crap.

That goes for png, that goes for gif, that goes for pdf - and in fact
that goes for svg *too*, if the actual source of the svg was something
else, and it was generated from some other data.

We have makefiles, but more importantly, few enough people actually
*generate* the documentation, that I think if it's an option to just
fix sphinx, we should do that instead. If it means that you have to
have some development version of sphinx, so be it. Most people read
the documentation either directly in the unprocessed text-files
("source code") or on the web (by searching for pre-formatted docs)
that I really don't think we need to worry too much about the
toolchain.

But what we *should* worry about is having the kernel source tree
contain source.

 Linus


Re: [Ksummit-discuss] Including images on Sphinx documents

2016-11-17 Thread James Bottomley
On Thu, 2016-11-17 at 13:16 -0200, Mauro Carvalho Chehab wrote:
> Hi Ted,
> 
> On Thu, 17 Nov 2016 09:52:44 -0500,
> Theodore Ts'o <ty...@mit.edu> wrote:
> 
> > On Thu, Nov 17, 2016 at 12:07:15PM +0100, Arnd Bergmann wrote:
> > > [adding Linus for clarification]
> > > 
> > > I understood the concern as being about binary files that you
> > > cannot
> > > modify with classic 'patch', which is a separate issue.  
> > 
> > I think the other complaint is that the image files aren't "source"
> > in the proper sense, since they are *not* the preferred form for
> > modification --- that's the svg files.  Beyond the license
> > compliance issues (which are satisfied because the .svg files are
> > included in the git tree), there is the SCM cleanliness argument of
> > not including generated files in the distribution, since this
> > increases the opportunities for the "real" source file and the
> > generated source file to get out of sync.  (As just one example, if
> > the patch can't represent the change to a binary file.)
> >
> > I do check in generated files on occasion --- usually because I
> > don't trust autoconf to be stable in terms of generating a correct
> > configure file from a configure.in across different versions of
> > autoconf and different macro libraries that might be installed on
> > the system.  So this isn't a hard and fast rule by any means
> > (although Linus may be stricter than I on that issue).
> >
> > I don't understand why it's so terrible to generate the image
> > file from the .svg file in a Makefile rule, and then copy it
> > somewhere else if Sphinx is too dumb to fetch it from the normal
> > location?
> 
> The images whose source is in .svg are now generated via Makefile
> for the PDF output (after my patches, already applied to the
> docs-next tree).
>
> So, the problem that remains is for those images whose source
> is a bitmap. If we want to stick with the Sphinx-supported formats,
> we have only two options for bitmaps: jpg or png. We could eventually
> use uuencode or base64 to make sure that the patches won't use
> the git binary diff extension, or, as Arnd proposed, use a portable
> bitmap format, in ascii, converting via Makefile, but losing
> the alpha channel which makes the background transparent.

If it can use svg, why not use that?  SVG files can be a simple xml
wrapper around a wide variety of graphic image formats, which are
embedded in the svg using the data-uri format, you know ...

Anything that handles SVGs should be able to handle all the embeddable
image formats, which should give you a way around whatever image
restrictions the tooling would otherwise impose.

James
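James's suggestion can be sketched in a few lines of shell: wrap an existing raster file in a minimal SVG whose `<image>` element carries the bytes as a base64 data URI. The file names and dimensions are illustrative, and the "image" is just stand-in signature bytes:

```shell
# Stand-in bytes for a real raster image (just the PNG signature here).
printf '\211PNG\r\n\032\n' > bayer.png

# Wrap the raster file in an SVG via a base64 data URI.
b64=$(base64 bayer.png | tr -d '\n')
cat > bayer.svg <<EOF
<svg xmlns="http://www.w3.org/2000/svg"
     xmlns:xlink="http://www.w3.org/1999/xlink"
     width="64" height="64">
  <image width="64" height="64"
         xlink:href="data:image/png;base64,${b64}"/>
</svg>
EOF
```

The wrapper itself is ordinary text, but the payload is no more editable than the binary it encodes, which is arguably Linus's "source code" objection in another form.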



Re: [Ksummit-discuss] Including images on Sphinx documents

2016-11-17 Thread Theodore Ts'o
On Thu, Nov 17, 2016 at 12:07:15PM +0100, Arnd Bergmann wrote:
> [adding Linus for clarification]
> 
> I understood the concern as being about binary files that you cannot
> modify with classic 'patch', which is a separate issue.

I think the other complaint is that the image files aren't "source" in
the proper sense, since they are *not* the preferred form for
modification --- that's the svg files.  Beyond the license compliance
issues (which are satisfied because the .svg files are included in
the git tree), there is the SCM cleanliness argument of not including
generated files in the distribution, since this increases the
opportunities for the "real" source file and the generated source file
to get out of sync.  (As just one example, if the patch can't
represent the change to a binary file.)

I do check in generated files on occasion --- usually because I don't
trust autoconf to be stable in terms of generating a correct
configure file from a configure.in across different versions of
autoconf and different macro libraries that might be installed on the
system.  So this isn't a hard and fast rule by any means (although
Linus may be stricter than I on that issue).

I don't understand why it's so terrible to generate the image
file from the .svg file in a Makefile rule, and then copy it somewhere
else if Sphinx is too dumb to fetch it from the normal location?

   - Ted


Re: [Ksummit-discuss] Including images on Sphinx documents

2016-11-17 Thread Johannes Berg

> So, the problem that remains is for those images whose source
> is a bitmap. If we want to stick with the Sphinx supported formats,
> we have only two options for bitmaps: jpg or png. We could eventually
> use uuencode or base64 to make sure that the patches won't use
> git binary diff extension, or, as Arnd proposed, use a portable
> bitmap format, in ascii, converting via Makefile, but losing
> the alpha channel which makes the background transparent.
> 

Or just "rewrite" them in svg? None of the gif files I can see actually
look like they'd have been drawn in gif format anyway. The original
source may be lost, but it doesn't seem all that hard to recreate them
in svg.

johannes
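Recreating a simple box-and-label figure by hand, as suggested above, takes only a few lines of SVG; this hypothetical two-field sketch is purely illustrative:

```shell
# A diagram authored directly as SVG text: real source, trivially diffable.
cat > fieldseq-sketch.svg <<'EOF'
<svg xmlns="http://www.w3.org/2000/svg" width="240" height="80">
  <rect x="10"  y="20" width="100" height="40" fill="none" stroke="black"/>
  <text x="25"  y="45" font-size="12">top field</text>
  <rect x="130" y="20" width="100" height="40" fill="none" stroke="black"/>
  <text x="140" y="45" font-size="12">bottom field</text>
</svg>
EOF
```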


Re: [Ksummit-discuss] Including images on Sphinx documents

2016-11-17 Thread Mauro Carvalho Chehab
Hi Ted,

On Thu, 17 Nov 2016 09:52:44 -0500,
Theodore Ts'o <ty...@mit.edu> wrote:

> On Thu, Nov 17, 2016 at 12:07:15PM +0100, Arnd Bergmann wrote:
> > [adding Linus for clarification]
> > 
> > I understood the concern as being about binary files that you cannot
> > modify with classic 'patch', which is a separate issue.  
> 
> I think the other complaint is that the image files aren't "source" in
> the proper sense, since they are *not* the preferred form for
> modification --- that's the svg files.  Beyond the license compliance
> issues (which are satisfied because the .svg files are included in
> the git tree), there is the SCM cleanliness argument of not including
> generated files in the distribution, since this increases the
> opportunities for the "real" source file and the generated source file
> to get out of sync.  (As just one example, if the patch can't
> represent the change to a binary file.)
>
> I do check in generated files on occasion --- usually because I don't
> trust autoconf to be stable in terms of generating a correct
> configure file from a configure.in across different versions of
> autoconf and different macro libraries that might be installed on the
> system.  So this isn't a hard and fast rule by any means (although
> Linus may be stricter than I on that issue).
>
> I don't understand why it's so terrible to generate the image
> file from the .svg file in a Makefile rule, and then copy it somewhere
> else if Sphinx is too dumb to fetch it from the normal location?

The images whose source is in .svg are now generated via Makefile
for the PDF output (after my patches, already applied to the docs-next
tree).

So, the problem that remains is for those images whose source
is a bitmap. If we want to stick with the Sphinx-supported formats,
we have only two options for bitmaps: jpg or png. We could eventually
use uuencode or base64 to make sure that the patches won't use
the git binary diff extension, or, as Arnd proposed, use a portable
bitmap format, in ascii, converting via Makefile, but losing
the alpha channel which makes the background transparent.

Thanks,
Mauro


Re: [Ksummit-discuss] Including images on Sphinx documents

2016-11-17 Thread Mauro Carvalho Chehab
On Thu, 17 Nov 2016 12:07:15 +0100,
Arnd Bergmann <a...@arndb.de> wrote:

> On Wednesday, November 16, 2016 6:26:33 PM CET Mauro Carvalho Chehab wrote:
> > On Wed, 16 Nov 2016 17:03:47 +0100,
> > Arnd Bergmann <a...@arndb.de> wrote:
> >   
> > > On Tuesday, November 8, 2016 8:50:36 AM CET Mauro Carvalho Chehab wrote:  

> > From what I understood from Linus, his problem is to carry on a
> > non-editable file at the Kernel tree. With that sense, a PNG file
> > is OK, as it is editable.  

Btw, Sphinx does indeed support PNG without external conversion, for
both html and pdf. For reference, these are the formats it supports
for images:

http://www.sphinx-doc.org/en/1.4.8/builders.html

There are just two formats supported for the output types we use:
PNG and JPEG.


> [adding Linus for clarification]
> 
> I understood the concern as being about binary files that you cannot
> modify with classic 'patch', which is a separate issue.

I don't think this is a big issue, as nowadays everyone uses git.

Also, this could be solved the other way around: someone could send a
patch to "patch" adding support for binary patches in the git format.

Also, images usually don't change that much. For example, the
tux logo only had 3 patches in git:
3d4f16348b77 Revert "linux.conf.au 2009: Tuz"
8032b526d1a3 linux.conf.au 2009: Tuz
1da177e4c3f4 (tag: v2.6.12-rc2) Linux-2.6.12-rc2

Some other examples:

$ git log --oneline ./Documentation/blockdev/drbd/DRBD-data-packets.svg
b411b3637fa7 The DRBD driver

$ git log --oneline ./Documentation/RCU/Design/Data-Structures/HugeTreeClassicRCU.svg
5c1458478c49 documentation: Add documentation for RCU's major data structures

The media images changed a bit more, but that was due to the recent
documentation efforts. Usually, it takes years for someone to touch
them. If you look at Kernel v4.7, for example, even the media
images didn't have any changes, except due to dir renames:

$ git log --oneline ./Documentation/DocBook/media/v4l/subdev-image-processing-full.svg
59ef29cc86af [media] v4l: Add subdev selections documentation: svg and dia files

$ git log --oneline --follow ./Documentation/DocBook/media/v4l/fieldseq_bt.pdf
4266129964b8 [media] DocBook: Move all media docbook stuff into its own directory
8e080c2e6cad V4L/DVB (12761): DocBook: add media API specs

$ git log --oneline --follow ./Documentation/DocBook/media/v4l/*.gif
bd7319dc325a [media] DocBook: Use base64 for gif/png files
4266129964b8 [media] DocBook: Move all media docbook stuff into its own directory

$ git log --oneline --follow Documentation/DocBook/media/bayer.png.b64
bd7319dc325a [media] DocBook: Use base64 for gif/png files

> > I had, in the past, problems with binary contents on either Mercurial
> > or git (before migrating to git, we used Mercurial for a while).
> > So, before Kernel 4.8, those .pdf, .png (and .gif) images were uuencoded,
> > in order to avoid troubles handling patches with them.
> > 
> > Nowadays, I don't see any issue handling binary images via e-mail or via 
> > git.  
> 
> > Btw, in that regard, SVG images are a lot worse to handle, as a single
> > line can easily have more than 998 characters, which makes some email
> > servers reject patches containing them. So, in version 3 of my patch
> > series, I had to use inkscape to ungroup some images and rewrite their
> > files, as otherwise two patches were silently rejected by the VGER
> > server.
> 
> Ok, good to know.
> 
> > [1] The reason to convert to PNG is that it means one less format to be
> > concerned with. Also, it doesn't make much sense to use two different
> > formats for bitmap images at the documentation.  
> 
> I just tried converting all the .gif and .png files to .pnm. This would
> make the files patchable but also add around 25MB to the uncompressed
> kernel source tree (118kb compressed, compared to 113kb for the .gif and
> .png files). This is certainly worse than the uuencoded files you
> had before

There's also another drawback: PNM doesn't allow a transparent background.
Some images have transparent backgrounds, so such a conversion would lose
them. Also, PNM is not supported by Sphinx, so it would require external
conversion, just like svg.

IMHO, if we're willing to make it easier to use patch, the best option is
uuencode (or base64). In that case, it could make sense to use it for svg
too, in order to solve the warning about patches that contain lines longer
than 998 characters (which is a violation of IETF RFC 2821).

The drawback of uuencode/base64 is that it makes the files harder to edit.
So, IMHO, I would keep them in binary format.

Thanks,
Mauro
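The pre-4.8 arrangement Mauro refers to (cf. commit bd7319dc325a) can be sketched as a base64 round trip; the file names are illustrative, and the decode step would normally live in a Makefile rule:

```shell
# Stand-in bytes for a real image (just the PNG signature).
printf '\211PNG\r\n\032\n' > bayer.png

# Store the image as text in the tree; classic 'patch' copes with this.
base64 bayer.png > bayer.png.b64

# Decode at documentation build time.
base64 -d bayer.png.b64 > bayer.decoded.png

# The round trip is lossless.
cmp bayer.png bayer.decoded.png
```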


Re: [Ksummit-discuss] Including images on Sphinx documents

2016-11-17 Thread Mauro Carvalho Chehab
On Thu, 17 Nov 2016 13:28:29 +0200,
Jani Nikula <jani.nik...@intel.com> wrote:

> On Thu, 17 Nov 2016, Arnd Bergmann <a...@arndb.de> wrote:
> > On Wednesday, November 16, 2016 6:26:33 PM CET Mauro Carvalho Chehab wrote: 
> >  
> >> On Wed, 16 Nov 2016 17:03:47 +0100,
> >> Arnd Bergmann <a...@arndb.de> wrote:
> >>   
> >> > On Tuesday, November 8, 2016 8:50:36 AM CET Mauro Carvalho Chehab wrote: 
> >> >  

> >> > During the kernel summit, I looked around for any binary files in
> >> > the kernel source tree, and except for the penguin logo, they are
> >> > all in Documentation/media/uapi/v4l/, but they are not all pdf
> >> > files, but also .png and .pdf.  
> >> 
> >> From what I understood from Linus, his problem is to carry on a
> >> non-editable file at the Kernel tree. With that sense, a PNG file
> >> is OK, as it is editable.  

Btw, Sphinx does indeed support PNG without external conversion, for
both html and pdf. For reference, these are the formats it supports
for images:

http://www.sphinx-doc.org/en/1.4.8/builders.html

There are just two formats supported for the output types we use:
PNG and JPEG. Any other format requires external conversion.

> >
> > [adding Linus for clarification]
> >
> > I understood the concern as being about binary files that you cannot
> > modify with classic 'patch', which is a separate issue.  

I don't think this is a big issue, as nowadays everyone uses git.

> 
> Also reported at [1]. So kernel.org has patches that you can't apply
> with either classic patch or git apply. They could at least be in git
> binary format so you could apply them with *something*. Of course, not
> having binaries at all would be clean.
> 
> [1] http://lkml.kernel.org/r/02a78907-933d-3f61-572e-28154b16b...@redhat.com

This could be solved the other way around: someone could send a patch
to "patch" adding support for binary patches in the git format.

The thing is: images usually don't change that much. For example, the
tux logo only had 3 patches in git:
3d4f16348b77 Revert "linux.conf.au 2009: Tuz"
8032b526d1a3 linux.conf.au 2009: Tuz
1da177e4c3f4 (tag: v2.6.12-rc2) Linux-2.6.12-rc2

Some other examples:

$ git log --oneline ./Documentation/blockdev/drbd/DRBD-data-packets.svg
b411b3637fa7 The DRBD driver

$ git log --oneline ./Documentation/RCU/Design/Data-Structures/HugeTreeClassicRCU.svg
5c1458478c49 documentation: Add documentation for RCU's major data structures

The media images changed a bit more, but that was due to the recent
documentation efforts. Usually, it takes years for someone to touch
them. If you look at Kernel v4.7, for example, even the media
images didn't have any changes, except due to dir renames or when
the files got encoded with base64:

$ git log --oneline v4.7 -- ./Documentation/DocBook/media/v4l/subdev-image-processing-full.svg
59ef29cc86af [media] v4l: Add subdev selections documentation: svg and dia files
$ git checkout v4.7
$ git log --oneline --follow ./Documentation/DocBook/media/v4l/fieldseq_bt.pdf
4266129964b8 [media] DocBook: Move all media docbook stuff into its own directory
8e080c2e6cad V4L/DVB (12761): DocBook: add media API specs
$ git log --oneline --follow ./Documentation/DocBook/media/v4l/*.gif
bd7319dc325a [media] DocBook: Use base64 for gif/png files
4266129964b8 [media] DocBook: Move all media docbook stuff into its own directory
$ git log --oneline --follow ./Documentation/DocBook/media/bayer.png.b64
bd7319dc325a [media] DocBook: Use base64 for gif/png files

> >> I had, in the past, problems with binary contents on either Mercurial
> >> or git (before migrating to git, we used Mercurial for a while).
> >> So, before Kernel 4.8, those .pdf, .png (and .gif) images were uuencoded,
> >> in order to avoid troubles handling patches with them.
> >> 
> >> Nowadays, I don't see any issue handling binary images via e-mail or via 
> >> git.  
> >
> >
> >  
> >> Btw, in that regard, SVG images are a lot worse to handle, as a single
> >> line can easily have more than 998 characters, which makes some email
> >> servers reject patches containing them. So, in version 3 of my patch
> >> series, I had to use inkscape to ungroup some images and rewrite their
> >> files, as otherwise two patches were silently rejected by the VGER
> >> server.
> >
> > Ok, good to know.
> >  
> >> [1] The reason to convert to PNG is that it means one less format to be
> >> concerned with. Also, it doesn't make much sense to use two different
> >> formats for bitmap images at the documentation.  

Re: [Ksummit-discuss] Including images on Sphinx documents

2016-11-17 Thread Linus Torvalds
On Thu, Nov 17, 2016 at 8:02 AM, Linus Torvalds wrote:
>
> No. That is how I *noticed* the issue. Those stupid pdf binary files
> have been around forever, I just didn't notice until the Fedora people
> started complaining about the patches.

Side note: my release patches these days enable both "--binary" and
"-M", so they require "git apply" now. So we handle the binaries fine
in patches now, but as mentioned, that was just what made me notice,
it wasn't a fix for the deeper ("source") problem.

   Linus
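What `--binary` buys can be demonstrated with a throwaway repository: a change to a binary file survives a `git format-patch`/`git apply` round trip that classic `patch` cannot perform. Everything below is illustrative scaffolding.

```shell
set -e
tmp=$(mktemp -d)
cd "$tmp"
git init -q repo
cd repo
git config user.email demo@example.com
git config user.name demo

# A binary file: stand-in PNG signature bytes plus a NUL so git
# treats it as binary.
printf '\211PNG\r\n\032\n\000' > logo.png
git add logo.png
git commit -qm 'add logo'

# Modify the binary and capture the change as a git binary patch.
printf 'more' >> logo.png
git commit -qam 'tweak logo'
git format-patch --binary -1 --stdout > ../tweak.patch

# Roll back, then re-apply: 'git apply' understands the binary hunk.
git reset -q --hard HEAD~1
git apply ../tweak.patch
```

Classic `patch` would stop at the "GIT binary patch" hunk in tweak.patch; `git apply` reconstructs the new file contents from it.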


Re: [Ksummit-discuss] Including images on Sphinx documents

2016-11-17 Thread Jani Nikula
On Thu, 17 Nov 2016, Arnd Bergmann <a...@arndb.de> wrote:
> On Wednesday, November 16, 2016 6:26:33 PM CET Mauro Carvalho Chehab wrote:
>> On Wed, 16 Nov 2016 17:03:47 +0100,
>> Arnd Bergmann <a...@arndb.de> wrote:
>> 
>> > On Tuesday, November 8, 2016 8:50:36 AM CET Mauro Carvalho Chehab wrote:
>> > > It basically calls ImageMagick "convert" tool for all png and
>> > > pdf files currently at the documentation (they're all at media,
>> > > ATM).  
>> > 
>> > It looks like we still need to find a way to address the .gif files
>> > though, as they have the same problem as the .pdf files.
>> 
>> Actually, my last patch series removed all *.pdf images and converted
>> all .gif files under Documentation/media to PNG[1]. I also replaced some
>> images with .svg, but the remaining ones are more complex. I'm not even
>> sure it makes sense to convert a few of them to vector graphics,
>> like in this case:
>>  https://mchehab.fedorapeople.org/kernel_docs/media/_images/selection.png
>> 
>> >
>> > During the kernel summit, I looked around for any binary files in
>> > the kernel source tree, and except for the penguin logo, they are
>> > all in Documentation/media/uapi/v4l/, but they are not all pdf
>> > files, but also .png and .pdf.
>> 
>> From what I understood from Linus, his problem is to carry on a
>> non-editable file at the Kernel tree. With that sense, a PNG file
>> is OK, as it is editable.
>
> [adding Linus for clarification]
>
> I understood the concern as being about binary files that you cannot
> modify with classic 'patch', which is a separate issue.

Also reported at [1]. So kernel.org has patches that you can't apply
with either classic patch or git apply. They could at least be in git
binary format so you could apply them with *something*. Of course, not
having binaries at all would be clean.

BR,
Jani.


[1] http://lkml.kernel.org/r/02a78907-933d-3f61-572e-28154b16b...@redhat.com

>
>> I had, in the past, problems with binary contents on either Mercurial
>> or git (before migrating to git, we used Mercurial for a while).
>> So, before Kernel 4.8, those .pdf, .png (and .gif) images were uuencoded,
>> in order to avoid troubles handling patches with them.
>> 
>> Nowadays, I don't see any issue handling binary images via e-mail or via git.
>
>
>
>> Btw, in that regard, SVG images are a lot worse to handle, as a single
>> line can easily have more than 998 characters, which makes some email
>> servers reject patches containing them. So, in version 3 of my patch
>> series, I had to use inkscape to ungroup some images and rewrite their
>> files, as otherwise two patches were silently rejected by the VGER
>> server.
>
> Ok, good to know.
>
>> [1] The reason to convert to PNG is that it means one less format to be
>> concerned with. Also, it doesn't make much sense to use two different
>> formats for bitmap images at the documentation.
>
> I just tried converting all the .gif and .png files to .pnm. This would
> make the files patchable but also add around 25MB to the uncompressed
> kernel source tree (118kb compressed, compared to 113kb for the .gif and
> .png files). This is certainly worse than the uuencoded files you
> had before
>
>   Arnd
> ___
> Ksummit-discuss mailing list
> ksummit-disc...@lists.linuxfoundation.org
> https://lists.linuxfoundation.org/mailman/listinfo/ksummit-discuss

-- 
Jani Nikula, Intel Open Source Technology Center


Re: [Ksummit-discuss] Including images on Sphinx documents

2016-11-17 Thread Arnd Bergmann
On Wednesday, November 16, 2016 6:26:33 PM CET Mauro Carvalho Chehab wrote:
> On Wed, 16 Nov 2016 17:03:47 +0100,
> Arnd Bergmann <a...@arndb.de> wrote:
> 
> > On Tuesday, November 8, 2016 8:50:36 AM CET Mauro Carvalho Chehab wrote:
> > > It basically calls ImageMagick "convert" tool for all png and
> > > pdf files currently at the documentation (they're all at media,
> > > ATM).  
> > 
> > It looks like we still need to find a way to address the .gif files
> > though, as they have the same problem as the .pdf files.
> 
> Actually, my last patch series removed all *.pdf images and converted
> all .gif files under Documentation/media to PNG[1]. I also replaced some
> images with .svg, but the remaining ones are more complex. I'm not even
> sure it makes sense to convert a few of them to vector graphics,
> like in this case:
>   https://mchehab.fedorapeople.org/kernel_docs/media/_images/selection.png
> 
> >
> > During the kernel summit, I looked around for any binary files in
> > the kernel source tree, and except for the penguin logo, they are
> > all in Documentation/media/uapi/v4l/, but they are not all pdf
> > files, but also .png and .pdf.
> 
> From what I understood from Linus, his problem is to carry on a
> non-editable file at the Kernel tree. With that sense, a PNG file
> is OK, as it is editable.

[adding Linus for clarification]

I understood the concern as being about binary files that you cannot
modify with classic 'patch', which is a separate issue.

> I had, in the past, problems with binary contents on either Mercurial
> or git (before migrating to git, we used Mercurial for a while).
> So, before Kernel 4.8, those .pdf, .png (and .gif) images were uuencoded,
> in order to avoid troubles handling patches with them.
> 
> Nowadays, I don't see any issue handling binary images via e-mail or via git.



> Btw, in that regard, SVG images are a lot worse to handle, as a single
> line can easily have more than 998 characters, which makes some email
> servers reject patches containing them. So, in version 3 of my patch
> series, I had to use inkscape to ungroup some images and rewrite their
> files, as otherwise two patches were silently rejected by the VGER
> server.

Ok, good to know.

> [1] The reason to convert to PNG is that it means one less format to be
> concerned with. Also, it doesn't make much sense to use two different
> formats for bitmap images at the documentation.

I just tried converting all the .gif and .png files to .pnm. This would
make the files patchable but also add around 25MB to the uncompressed
kernel source tree (118kb compressed, compared to 113kb for the .gif and
.png files). This is certainly worse than the uuencoded files you
had before.

Arnd


Re: [Ksummit-discuss] Including images on Sphinx documents

2016-11-16 Thread Mauro Carvalho Chehab
Hi Arnd,

On Wed, 16 Nov 2016 17:03:47 +0100,
Arnd Bergmann <a...@arndb.de> wrote:

> On Tuesday, November 8, 2016 8:50:36 AM CET Mauro Carvalho Chehab wrote:
> > It basically calls the ImageMagick "convert" tool on all png and
> > pdf files currently in the documentation (they're all under media,
> > ATM).
> 
> It looks like we still need to find a way to address the .gif files
> though, as they have the same problem as the .pdf files.

Actually, my last patch series removed all *.pdf images and converted
all .gif files under Documentation/media to PNG[1]. I also replaced some
images with .svg, but the remaining ones are more complex. I'm not even
sure it makes sense to convert a few of them to vector graphics,
as in this case:
https://mchehab.fedorapeople.org/kernel_docs/media/_images/selection.png

>
> During the kernel summit, I looked around for any binary files in
> the kernel source tree, and except for the penguin logo, they are
> all in Documentation/media/uapi/v4l/, and they are not all .pdf
> files; there are also .png and .gif files.

From what I understood from Linus, his concern is about carrying
non-editable files in the Kernel tree. In that sense, a PNG file
is OK, as it is editable.

In the past, I had problems with binary content in both Mercurial
and git (we used Mercurial for a while before migrating to git).
So, before Kernel 4.8, those .pdf, .png (and .gif) images were uuencoded
to avoid trouble handling patches that touched them.

Nowadays, I don't see any issue handling binary images via e-mail or via git.
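The uuencode workaround mentioned above can be illustrated with the stdlib; this is my own hedged sketch (the `uuencode_lines` helper is not a tool from the thread), showing why uuencoding sidesteps the binary-patch problem: it turns arbitrary bytes into short, plain-ASCII lines that classic 'patch' and mail transport handle fine.

```python
# binascii.b2a_uu encodes at most 45 bytes per call, one output line each.
import binascii

def uuencode_lines(data: bytes):
    """Yield uuencoded text lines for `data` (begin/end headers omitted)."""
    for off in range(0, len(data), 45):
        chunk = data[off:off + 45]
        yield binascii.b2a_uu(chunk).decode("ascii").rstrip("\n")

lines = list(uuencode_lines(bytes(range(100))))
print(len(lines))                   # → 3 (chunks of <= 45 bytes)
print(max(len(l) for l in lines))   # → 61 (every line is short, printable ASCII)
```

Because every line is short and printable, a uuencoded image diffs and applies like any other text file.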

Btw, in that regard, SVG images are a lot worse to handle, as a single
line can easily have more than 998 characters, which makes some email
servers reject patches containing them. So, for version 3 of my patch
series, I had to use Inkscape to ungroup some images and rewrite their
files; otherwise, two patches were silently rejected by the VGER
server.
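The 998-character ceiling comes from RFC 5322's line-length limit, which vger enforces. A quick checker for the failure mode described above can be sketched as follows (the `overlong_lines` helper is my own, not a tool from the thread):

```python
# Flag lines that would exceed the RFC 5322 hard limit of 998 characters.
def overlong_lines(text, limit=998):
    """Return (line_number, length) for every line exceeding `limit`."""
    return [(n, len(line))
            for n, line in enumerate(text.splitlines(), start=1)
            if len(line) > limit]

# A grouped SVG export often emits all its path data on one huge line;
# ungrouping in Inkscape and re-saving re-wraps it.
svg_like = "<svg>\n" + "M0,0 L1,1 " * 200 + "\n</svg>"
print(overlong_lines(svg_like))   # → [(2, 2000)]
```

Running this over a patch before sending would have caught the two silently rejected patches.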

[1] The reason to convert to PNG is that it means one less format to be
concerned with. Also, it doesn't make much sense to use two different
formats for bitmap images in the documentation.

Thanks,
Mauro


Re: [Ksummit-discuss] Including images on Sphinx documents

2016-11-16 Thread Arnd Bergmann
On Tuesday, November 8, 2016 8:50:36 AM CET Mauro Carvalho Chehab wrote:
> > [...]
> > > And it may even require "--shell-escape" to be passed at the xelatex
> > > call if inkscape is not in the path, which seems to be a strong
> > > indication that SVG support is not native to texlive, but, instead,
> > > just a way to make LaTeX call inkscape to do the image conversion.
> > 
> > Please don't require --shell-escape as part of the TeX workflow.  If
> > LaTeX can't handle the desired image format natively, it needs
> > conversion in advance.
> 
> Agreed. I sent a patch series to linux-doc, doing the conversion in
> advance:
> https://marc.info/?l=linux-doc&m=147859902804144&w=2
> 
> Not sure why, but the archives don't have all patches yet.
> Anyway, the relevant one is this:
> 
> https://git.linuxtv.org/mchehab/experimental.git/commit/?h=pdf-fixes&id=5d41c452c787f6a6c755a3855312435bc439acb8
> 
> It basically calls the ImageMagick "convert" tool on all png and
> pdf files currently in the documentation (they're all under media,
> ATM).

It looks like we still need to find a way to address the .gif files
though, as they have the same problem as the .pdf files.

During the kernel summit, I looked around for any binary files in
the kernel source tree, and except for the penguin logo, they are
all in Documentation/media/uapi/v4l/, and they are not all .pdf
files; there are also .png and .gif files.

Arnd


Re: Including images on Sphinx documents

2016-11-14 Thread Mauro Carvalho Chehab
On Sun, 13 Nov 2016 14:00:27 -0700,
Jonathan Corbet <cor...@lwn.net> wrote:

> On Mon, 7 Nov 2016 09:46:48 -0200
> Mauro Carvalho Chehab <mche...@s-opensource.com> wrote:
> 
> > When running LaTeX in interactive mode, building just the media
> > PDF file with:
> > 
> > $ cls;make cleandocs; make SPHINXOPTS="-j5" DOCBOOKS="" 
> > SPHINXDIRS=media latexdocs 
> > $ make PDFLATEX=xelatex LATEXOPTS="-interaction=interactive" -C 
> > Documentation/output/media/latex
> > 
> > I get this:
> > 
> > LaTeX Warning: Hyper reference `uapi/v4l/subdev-formats:bayer-patterns' 
> > on page
> >  153 undefined on input line 21373.
> > 
> >  [153]
> > ! Extra alignment tab has been changed to \cr.
> >  \endtemplate 
> > 
> > l.21429 \unskip}\relax \unskip}
> >\relax \\
> > ? 
> > 
> > This patch fixes the issue:
> > 
> > https://git.linuxtv.org/mchehab/experimental.git/commit/?h=dirty-pdf&id=b709de415f34d77cc121cad95bece9c7ef4d12fd
> > 
> > That means that Sphinx is not generating the right LaTeX output even for
> > (some?) PNG images.  
> 
> So I'm seriously confused.
> 
> I can get that particular message - TeX is complaining about too many
> columns in the table.  But applying your patch (with a suitable bayer.pdf
> provided) does not fix the problem.  Indeed, I can remove the figure with
> the image entirely and still not fix the problem.  Are you sure that the
> patch linked here actually fixed it for you?

There are two patches in the /6 patch series fixing the column issues
for the Bayer formats table.

I guess I got confused with that, because of this warning:

LaTeX Warning: Hyper reference `uapi/v4l/subdev-formats:bayer-patterns' 
on page
 153 undefined on input line 21373.

Anyway, you're right: PNG is indeed working fine[1], and the
cross-reference is OK. The issue is just with SVG.

I'll fold it with patch 6/6 and submit a new series.

Sorry for the mess.

[1] I tested now on sphinx 1.4.8 - I was using an older version
before (1.4.5, I guess).

Thanks,
Mauro

[PATCH] docs-rst: Let Sphinx do PNG image conversion

Signed-off-by: Mauro Carvalho Chehab <mche...@s-opensource.com>

diff --git a/Documentation/media/Makefile b/Documentation/media/Makefile
index 0f08bd8b87ba..297b85c37ab9 100644
--- a/Documentation/media/Makefile
+++ b/Documentation/media/Makefile
@@ -12,17 +12,6 @@ TARGETS := $(addprefix $(BUILDDIR)/, $(FILES))
 
 IMAGES = \
typical_media_device.svg \
-   uapi/v4l/fieldseq_tb.png \
-   uapi/v4l/selection.png \
-   uapi/v4l/vbi_hsync.png \
-   uapi/v4l/fieldseq_bt.png \
-   uapi/v4l/crop.png \
-   uapi/v4l/nv12mt.png \
-   uapi/v4l/vbi_525.png \
-   uapi/v4l/nv12mt_example.png \
-   uapi/v4l/vbi_625.png \
-   uapi/v4l/pipeline.png \
-   uapi/v4l/bayer.png \
uapi/dvb/dvbstb.svg \
uapi/v4l/constraints.svg \
uapi/v4l/subdev-image-processing-full.svg \
@@ -37,8 +26,6 @@ cmd = $(echo-cmd) $(cmd_$(1))
 quiet_cmd_genpdf = GENPDF  $2
   cmd_genpdf = convert $2 $3
 
-%.pdf: %.png
-   @$(call cmd,genpdf,$<,$@)
 %.pdf: %.svg
@$(call cmd,genpdf,$<,$@)
 
diff --git a/Documentation/media/uapi/v4l/crop.rst 
b/Documentation/media/uapi/v4l/crop.rst
index 31c5ba5ebd04..578c6f3d20f3 100644
--- a/Documentation/media/uapi/v4l/crop.rst
+++ b/Documentation/media/uapi/v4l/crop.rst
@@ -53,8 +53,8 @@ Cropping Structures
 
 .. _crop-scale:
 
-.. figure::  crop.*
-:alt:crop.pdf / crop.png
+.. figure::  crop.png
+:alt:crop.png
 :align:  center
 
 Image Cropping, Insertion and Scaling
diff --git a/Documentation/media/uapi/v4l/dev-raw-vbi.rst 
b/Documentation/media/uapi/v4l/dev-raw-vbi.rst
index fb4d9c4098a0..f81d906137ee 100644
--- a/Documentation/media/uapi/v4l/dev-raw-vbi.rst
+++ b/Documentation/media/uapi/v4l/dev-raw-vbi.rst
@@ -221,8 +221,8 @@ and always returns default parameters as :ref:`VIDIOC_G_FMT 
` does
 
 .. _vbi-hsync:
 
-.. figure::  vbi_hsync.*
-:alt:vbi_hsync.pdf / vbi_hsync.png
+.. figure::  vbi_hsync.png
+:alt:vbi_hsync.png
 :align:  center
 
 **Figure 4.1. Line synchronization**
@@ -230,8 +230,8 @@ and always returns default parameters as :ref:`VIDIOC_G_FMT 
` does
 
 .. _vbi-525:
 
-.. figure::  vbi_525.*
-:alt:vbi_525.pdf / vbi_525.png
+.. figure::  vbi_525.png
+:alt:vbi_525.png
 :align:  center
 
 **Figure 4.2. ITU-R 525 line numbering (M/NTSC and M/PAL)**
@@ -240,8 +240,8 @@ and always returns default parameters as :ref:`VIDIOC_G_FMT 
` does
 
 .. _vbi-625:
 
-.. figure::  vbi_625.*
-:alt:vbi_625.pdf / vbi_625.png
+.. figure::  vbi_625.png
+:alt:vbi_625.png

Re: Including images on Sphinx documents

2016-11-14 Thread Mauro Carvalho Chehab
On Sun, 13 Nov 2016 12:52:50 -0700,
Jonathan Corbet <cor...@lwn.net> wrote:

> On Mon, 7 Nov 2016 07:55:24 -0200
> Mauro Carvalho Chehab <mche...@s-opensource.com> wrote:
> 
> > So, we have a few alternatives:
> > 
> > 1) copy (or symlink) all rst files to Documentation/output (or to the
> >build dir specified via O= directive) and generate the *.pdf there,
> >and produce those converted images via Makefile.;
> > 
> > 2) add a Sphinx extension that would internally call ImageMagick and/or
> >inkscape to convert the bitmap;
> > 
> > 3) if possible, add an extension to trick Sphinx into considering the 
> >output dir as a source dir too.  
> 
> So, obviously, I've been letting this go by while dealing with other
> stuff...
> 
> I really think that 2) is the one we want.  Copying all the stuff and
> operating on the copies, beyond being a bit of a hack, just seems like a
> recipe for weird build problems in the future.

Yes, (2) sounds like the best option.

> We should figure out why PNG files don't work.  Maybe I'll give that a
> try at some point soon, if I can find a moment.  Working around tools
> bugs seems like the wrong approach.

I appreciate any efforts on that.

> Working from .svg seems optimal, but I don't like the --shell-escape
> thing at all.
> 
> [Along those lines, we've picked up a lot of lines like this:
> 
>restricted \write18 enabled.
> 
> That, too, is shell execution stuff.  I've not been able to figure out
> where it came from, but I would sure like to get rid of it...]

I didn't know that! I'm new to LaTeX. Frankly, the log output looks
very scary to me, as there are lots of warnings there, and debugging
each of them takes time. I don't see any \write18 inside the generated
.tex files or inside the sphinx.sty file. Perhaps it comes from some
TeX package, like adjustbox?
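On the question above, one place to look: on TeX Live the "restricted \write18 enabled" banner is not emitted by sphinx.sty or the generated .tex files at all, but by the engine itself, driven by the shell_escape setting in texmf.cnf (whose default is "p", partial, with an allow-list in shell_escape_commands). A hedged diagnostic sketch, assuming a standard TeX Live install and degrading gracefully without one:

```shell
# Query the TeX Live config values that control restricted shell escape.
kpsewhich -var-value=shell_escape 2>/dev/null || echo "kpsewhich not installed"
kpsewhich -var-value=shell_escape_commands 2>/dev/null || echo "kpsewhich not installed"
```

If shell_escape reports "p", every xelatex run will print the restricted-\write18 line regardless of what the document does.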

> 
> jon



Thanks,
Mauro


Re: Including images on Sphinx documents

2016-11-13 Thread Jonathan Corbet
On Mon, 7 Nov 2016 09:46:48 -0200
Mauro Carvalho Chehab <mche...@s-opensource.com> wrote:

> When running LaTeX in interactive mode, building just the media
> PDF file with:
> 
>   $ cls;make cleandocs; make SPHINXOPTS="-j5" DOCBOOKS="" 
> SPHINXDIRS=media latexdocs 
>   $ make PDFLATEX=xelatex LATEXOPTS="-interaction=interactive" -C 
> Documentation/output/media/latex
> 
> I get this:
> 
>   LaTeX Warning: Hyper reference `uapi/v4l/subdev-formats:bayer-patterns' 
> on page
>153 undefined on input line 21373.
> 
>[153]
>   ! Extra alignment tab has been changed to \cr.
>\endtemplate 
> 
>   l.21429 \unskip}\relax \unskip}
>  \relax \\
>   ? 
> 
> This patch fixes the issue:
>   
> https://git.linuxtv.org/mchehab/experimental.git/commit/?h=dirty-pdf&id=b709de415f34d77cc121cad95bece9c7ef4d12fd
> 
> That means that Sphinx is not generating the right LaTeX output even for
> (some?) PNG images.

So I'm seriously confused.

I can get that particular message - TeX is complaining about too many
columns in the table.  But applying your patch (with a suitable bayer.pdf
provided) does not fix the problem.  Indeed, I can remove the figure with
the image entirely and still not fix the problem.  Are you sure that the
patch linked here actually fixed it for you?

jon


Re: Including images on Sphinx documents

2016-11-13 Thread Jonathan Corbet
On Mon, 7 Nov 2016 07:55:24 -0200
Mauro Carvalho Chehab <mche...@s-opensource.com> wrote:

> So, we have a few alternatives:
> 
> 1) copy (or symlink) all rst files to Documentation/output (or to the
>build dir specified via O= directive) and generate the *.pdf there,
>and produce those converted images via Makefile.;
> 
> 2) add a Sphinx extension that would internally call ImageMagick and/or
>inkscape to convert the bitmap;
> 
> 3) if possible, add an extension to trick Sphinx into considering the 
>output dir as a source dir too.

So, obviously, I've been letting this go by while dealing with other
stuff...

I really think that 2) is the one we want.  Copying all the stuff and
operating on the copies, beyond being a bit of a hack, just seems like a
recipe for weird build problems in the future.

We should figure out why PNG files don't work.  Maybe I'll give that a
try at some point soon, if I can find a moment.  Working around tools
bugs seems like the wrong approach.

Working from .svg seems optimal, but I don't like the --shell-escape
thing at all.

[Along those lines, we've picked up a lot of lines like this:

 restricted \write18 enabled.

That, too, is shell execution stuff.  I've not been able to figure out
where it came from, but I would sure like to get rid of it...]

jon


Re: [Ksummit-discuss] Including images on Sphinx documents

2016-11-11 Thread Markus Heiser

On 11.11.2016 at 12:22, Jani Nikula wrote:

> On Thu, 10 Nov 2016, Jani Nikula  wrote:
>> On Thu, 10 Nov 2016, Markus Heiser  wrote:
>>> Could this POC persuade you? If so, I'll send a more elaborate RFC.
>>> What do you think?
>> 
>> Sorry, I do not wish to be part of this.
> 
> That was uncalled for, apologies.

It's OK, sometimes we are all in a hurry and want to shorten things.

> Like I said, I don't think this is the right approach. Call it an
> unsubstantiated gut feel coming from experience.

Yes, building a bunch of symbolic links "smells". Unfortunately,
I currently see no other way to solve the conflict between
Linux's "O=/foo" and Sphinx's "sourcedir", so that was my
proposal.

> However, I do not have
> the time to properly dig into this either, and that frustrates me. I
> wish I could be more helpful, but I can't right now.

It's a pity.

-- Markus --


Re: [Ksummit-discuss] Including images on Sphinx documents

2016-11-11 Thread Jani Nikula
On Thu, 10 Nov 2016, Jani Nikula  wrote:
> On Thu, 10 Nov 2016, Markus Heiser  wrote:
>> Could this POC persuade you? If so, I'll send a more elaborate RFC.
>> What do you think?
>
> Sorry, I do not wish to be part of this.

That was uncalled for, apologies.

Like I said, I don't think this is the right approach. Call it an
unsubstantiated gut feel coming from experience. However, I do not have
the time to properly dig into this either, and that frustrates me. I
wish I could be more helpful, but I can't right now.

BR,
Jani.


-- 
Jani Nikula, Intel Open Source Technology Center


Re: [Ksummit-discuss] Including images on Sphinx documents

2016-11-11 Thread Mauro Carvalho Chehab
On Wed, 09 Nov 2016 13:58:12 +0200,
Jani Nikula <jani.nik...@linux.intel.com> wrote:

> On Wed, 09 Nov 2016, Markus Heiser <markus.hei...@darmarit.de> wrote:
> > On 09.11.2016 at 12:16, Jani Nikula <jani.nik...@linux.intel.com> wrote:
> >>> So I vote for :
> >>>   
> >>>> 1) copy (or symlink) all rst files to Documentation/output (or to the
> >>>> build dir specified via O= directive) and generate the *.pdf there,
> >>>> and produce those converted images via Makefile.;  
> >> 
> >> We're supposed to solve problems, not create new ones.  
> >
> > ... new ones? ...  
> 
> Handle in-tree builds without copying.
> 
> Make dependency analysis with source rst and "intermediate" rst work.
> 
> Make sure your copying gets the timestamps right.
> 
> Make Sphinx dependency analysis look at the right copies depending on
> in-tree vs. out-of-tree. Generally make sure it doesn't confuse Sphinx's
> own dependency analysis.

I agree with Jani here: copying the files will make Sphinx recompile
the entire documentation every time, which is bad. OK, some Makefile
logic could be added to copy only on changes, but that would increase
the Makefile complexity.

So, I prefer not to copy. As I said before, a Sphinx extension that
would transparently handle non-PDF images during PDF document generation,
doing whatever conversion is needed, seems to be the right fix
here, but someone would need to step up and write such an extension.

-- 

Cheers,
Mauro


Re: [Ksummit-discuss] Including images on Sphinx documents

2016-11-10 Thread Jani Nikula
On Thu, 10 Nov 2016, Markus Heiser  wrote:
> Could this POC persuade you? If so, I'll send a more elaborate RFC.
> What do you think?

Sorry, I do not wish to be part of this.

BR,
Jani.

-- 
Jani Nikula, Intel Open Source Technology Center


Re: [Ksummit-discuss] Including images on Sphinx documents

2016-11-09 Thread Markus Heiser


On 09.11.2016 12:58, Jani Nikula wrote:
> On Wed, 09 Nov 2016, Markus Heiser <markus.hei...@darmarit.de> wrote:
>> On 09.11.2016 at 12:16, Jani Nikula <jani.nik...@linux.intel.com> wrote:
>>>> So I vote for :
>>>>
>>>>> 1) copy (or symlink) all rst files to Documentation/output (or to the
>>>>> build dir specified via O= directive) and generate the *.pdf there,
>>>>> and produce those converted images via Makefile.;
>>>
>>> We're supposed to solve problems, not create new ones.
>>
>> ... new ones? ...
>
> Handle in-tree builds without copying.
>
> Make dependency analysis with source rst and "intermediate" rst work.
>
> Make sure your copying gets the timestamps right.
>
> Make Sphinx dependency analysis look at the right copies depending on
> in-tree vs. out-of-tree. Generally make sure it doesn't confuse Sphinx's
> own dependency analysis.
>
> The stuff I didn't think of.

It might be easier than you think at first.

> Sure, it's all supposed to be basic Makefile stuff, but don't make the mistake
> of thinking just one invocation of 'cp' will solve all the problems.

I'm being naive and just using 'cp -sa'; see the patch below.

> It all adds to the complexity we were trying to avoid when dumping DocBook. It
> adds to the complexity of debugging stuff. (And hey, there's still the one
> rebuilding-stuff-for-no-reason issue open.)

And hey ;-) as I wrote to you [1], this is a bug in Sphinx. Yes, I haven't had
time to send a bugfix to Sphinx, but that won't even help us (bugfixes in
Sphinx will only land in future releases).

> If you want to keep the documentation build sane, try to avoid the Makefile
> preprocessing.

I am just the one helping Mauro to be productive; if he needs preprocessing, I
implement proposals. I know that you fear preprocessing, since it tends to
fall back to what we had with DocBook's build process. We discussed this
already; it might be better for you to sort this out with Mauro and the
others who need preprocessing.

> And same old story, if you fix this for real, even if as a Sphinx extension,
> *other* people than kernel developers will be interested, and *we* don't have
> to do so much ourselves.

I don't think so; the kind of header-file parsing we have, and the building
of content from MAINTAINERS, ABI, etc., is very kernel specific.

Anyway, back to my point 'copy (or symlink) all rst files'. Please take a look
at my patch below. Keep in mind it's just a POC.

Could this POC persuade you? If so, I'll send a more elaborate RFC.
What do you think?

[1] https://www.mail-archive.com/linux-doc@vger.kernel.org/msg07302.html

-- Markus --

> BR,
> Jani.
>>
>>>> IMO placing 'sourcedir' in O= is more sane, since this marries the
>>>> Linux Makefile concept (relative to $PWD) with the Sphinx concept
>>>> (in or below 'sourcedir').
>>
>> -- Markus --
>

diff --git a/Documentation/Makefile.sphinx b/Documentation/Makefile.sphinx
index ec0c77d..8e904c1 100644
--- a/Documentation/Makefile.sphinx
+++ b/Documentation/Makefile.sphinx
@@ -13,6 +13,10 @@ BUILDDIR  = $(obj)/output
 PDFLATEX  = xelatex
 LATEXOPTS = -interaction=batchmode

+ifdef SPHINXDIRS
+else
+endif
+
 # User-friendly check for sphinx-build
 HAVE_SPHINX := $(shell if which $(SPHINXBUILD) >/dev/null 2>&1; then echo 1; 
else echo 0; fi)

@@ -50,30 +54,38 @@ loop_cmd = $(echo-cmd) $(cmd_$(1))
 #* dest folder relative to $(BUILDDIR) and
 #* cache folder relative to $(BUILDDIR)/.doctrees
 # $4 dest subfolder e.g. "man" for man pages at media/man
-# $5 reST source folder relative to $(srctree)/$(src),
+# $5 reST source folder relative to $(obj),
 #e.g. "media" for the linux-tv book-set at ./Documentation/media

 quiet_cmd_sphinx = SPHINX  $@ --> file://$(abspath $(BUILDDIR)/$3/$4)
-  cmd_sphinx = $(MAKE) BUILDDIR=$(abspath $(BUILDDIR)) 
$(build)=Documentation/media all;\
-   BUILDDIR=$(abspath $(BUILDDIR)) SPHINX_CONF=$(abspath 
$(srctree)/$(src)/$5/$(SPHINX_CONF)) \
+  cmd_sphinx = $(MAKE) BUILDDIR=$(BUILDDIR) $(build)=Documentation/media 
all;\
+   BUILDDIR=$(BUILDDIR) SPHINX_CONF=$(obj)/$5/$(SPHINX_CONF) \
$(SPHINXBUILD) \
-b $2 \
-   -c $(abspath $(srctree)/$(src)) \
-   -d $(abspath $(BUILDDIR)/.doctrees/$3) \
+   -c $(obj) \
+   -d $(obj)/.doctrees/$3 \
-D version=$(KERNELVERSION) -D release=$(KERNELRELEASE) \
$(ALLSPHINXOPTS) \
-   $(abspath $(srctree)/$(src)/$5) \
-   $(abspath $(BUILDDIR)/$3/$4);
-
-htmldocs:
+   $(obj)/$5 \
+   $(BUILDDIR)/$3/$4;
+
+ifdef O
+sync:
+   rm -rf $(objtree)/$(obj)
+   cp -sa $(srctree)/$(obj) $(objtree)
+else
+sync:
+endif
+
+htmldocs: sync
@$(foreach var,$(SPHINXDIRS),$(call 
loop_cmd,sphinx,html,$(var),,$(var)))

-latexdocs:
+latexdocs: sync

Re: Including images on Sphinx documents

2016-11-09 Thread Mauro Carvalho Chehab
On Mon, 07 Nov 2016 12:53:55 +0200,
Jani Nikula <jani.nik...@intel.com> wrote:

> On Mon, 07 Nov 2016, Mauro Carvalho Chehab <mche...@s-opensource.com> wrote:
> > Hi Jon,
> >
> > I'm trying to sort out the next steps to do after KS, with regards to
> > images included on RST files.
> >
> > The issue is that Sphinx image support highly depends on the output
> > format. Also, despite TeXLive's support for svg and png images[1], Sphinx
> > doesn't produce the right LaTeX commands to use svg[2]. In my tests
> > with PNG on my notebook, it didn't seem to do the right thing for
> > PNG either. So, it seems that the only safe way to support images is
> > to convert all of them to PDF for the latex/pdf build.
> >
> > [1] On Fedora, via texlive-dvipng and texlive-svg
> > [2] https://github.com/sphinx-doc/sphinx/issues/1907
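The advance conversion settled on above can be sketched with standard tools. A hedged sketch: it assumes ImageMagick's `convert` and the pre-1.0 Inkscape `--export-pdf` flag (Inkscape 1.0 renamed it `--export-filename`), and is guarded so it is a no-op where the tools or files are absent:

```shell
# Bitmaps -> PDF via ImageMagick, when available.
for img in *.png *.gif; do
    command -v convert >/dev/null 2>&1 && [ -e "$img" ] && \
        convert "$img" "${img%.*}.pdf"
done
# SVG -> PDF via Inkscape, which keeps the vectors instead of rasterizing.
if command -v inkscape >/dev/null 2>&1; then
    inkscape diagram.svg --export-pdf=diagram.pdf
fi
echo "image conversion pass finished"
```

The open question in the thread is only *where* to run this: from the Makefile (with the O= complications discussed below) or from inside Sphinx itself.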
> >
> > As far as I understand from KS, two decisions were taken:
> >
> > - We're not adding a sphinx extension to run generic commands;
> > - The PDF images should be built at build time from their source files
> >   (either svg or bitmap), instead of shipping the corresponding
> >   PDF files generated from them.
> >
> > As you know, we use several images at the media documentation:
> > https://www.kernel.org/doc/html/latest/_images/
> >
> > Those images are tightly coupled with the explanation texts. So,
> > maintaining them away from the documentation is not an option.
> >
> > I was originally thinking that adding a graphviz extension would solve the
> > issue, but, in fact, most of the images aren't diagrams. Instead, several
> > of them show the result of passing certain parameters to
> > the ioctls, explaining things like scaling and cropping and how bytes are
> > packed in some image formats.
> >
> > Linus proposed to call some image conversion tool like ImageMagick or
> > inkscape to convert them to PDF when building the pdfdocs or latexdocs
> > target in the Makefile, but there's an issue with that: Sphinx doesn't read
> > files from Documentation/output, and writing them directly to the
> > source dir would go against what is expected when the "O=" argument
> > is passed to make.
> >
> > So, we have a few alternatives:
> >
> > 1) copy (or symlink) all rst files to Documentation/output (or to the
> >build dir specified via O= directive) and generate the *.pdf there,
> >and produce those converted images via Makefile.;
> >
> > 2) add a Sphinx extension that would internally call ImageMagick and/or
> >inkscape to convert the bitmap;
> >
> > 3) if possible, add an extension to trick Sphinx into considering the 
> >output dir as a source dir too.  
> 
> Looking at the available extensions, and the images to be displayed,
> seems to me making svg work, somehow, is the right approach. (As opposed
> to trying to represent the images in graphviz or whatnot.)

I guess I answered this one already, but it got lost somehow...

The problem is not just with svg. Sphinx also does the wrong thing with
PNG, despite apparently generating the right LaTeX image include command.

> IIUC texlive supports displaying svg directly, but the problem is that
> Sphinx produces bad latex for that. Can we make it work by manually
> writing the latex? If yes, we wouldn't need to use an external tool to
> convert the svg to something else, but rather fix the latex. Thus:
> 
> 4a) See if this works:
> 
> .. only:: html
> 
>.. image:: foo.svg

We're currently using .. figure:: instead, as it allows an optional caption
and legend, but I got the idea.

> .. raw:: latex
> 
>

That is a horrible hack, and will lose other attributes of
image:: (or figure::), like :align:.

Also, it won't solve the problem, as the images will still need to be copied
to the build dir via Makefile, because Sphinx only copies the images it
recognizes.

So, in practice, the only difference is that the Makefile would be calling
"cp" instead of "convert", plus we'd have to hack all the ReST sources.

> 4b) Add a directive extension to make the above happen automatically.

If doable, I agree that this is the best solution. Any volunteers to write
such an extension?

> Of course, the correct fix is to have this fixed in upstream Sphinx, but
> as a workaround an extension doing the above seems plausible, and not
> too much effort - provided that we can make the raw latex work.

Yeah, fixing it in upstream Sphinx would be best, but we'll still
need to maintain the workaround for a while for unpatched versions
of Sphinx.

Thanks,
Mauro


Re: [Ksummit-discuss] Including images on Sphinx documents

2016-11-09 Thread Jani Nikula
On Wed, 09 Nov 2016, Markus Heiser <markus.hei...@darmarit.de> wrote:
> On 09.11.2016 at 12:16, Jani Nikula <jani.nik...@linux.intel.com> wrote:
>>> So I vote for :
>>> 
>>>> 1) copy (or symlink) all rst files to Documentation/output (or to the
>>>> build dir specified via O= directive) and generate the *.pdf there,
>>>> and produce those converted images via Makefile.;
>> 
>> We're supposed to solve problems, not create new ones.
>
> ... new ones? ...

Handle in-tree builds without copying.

Make dependency analysis with source rst and "intermediate" rst work.

Make sure your copying gets the timestamps right.

Make Sphinx dependency analysis look at the right copies depending on
in-tree vs. out-of-tree. Generally make sure it doesn't confuse Sphinx's
own dependency analysis.

The stuff I didn't think of.

Sure, it's all supposed to be basic Makefile stuff, but don't make the
mistake of thinking just one invocation of 'cp' will solve all the
problems. It all adds to the complexity we were trying to avoid when
dumping DocBook. It adds to the complexity of debugging stuff. (And hey,
there's still the one rebuilding-stuff-for-no-reason issue open.)

If you want to keep the documentation build sane, try to avoid the
Makefile preprocessing. And same old story, if you fix this for real,
even if as a Sphinx extension, *other* people than kernel developers
will be interested, and *we* don't have to do so much ourselves.


BR,
Jani.



>
>>> IMO placing 'sourcedir' in O= is more sane, since this marries the
>>> Linux Makefile concept (relative to $PWD) with the Sphinx concept
>>> (in or below 'sourcedir').
>
> -- Markus --

-- 
Jani Nikula, Intel Open Source Technology Center


Re: [Ksummit-discuss] Including images on Sphinx documents

2016-11-09 Thread Jani Nikula
On Wed, 09 Nov 2016, Mauro Carvalho Chehab <mche...@s-opensource.com> wrote:
> On Wed, 09 Nov 2016 13:16:55 +0200,
> Jani Nikula <jani.nik...@linux.intel.com> wrote:
>
>> >> 1) copy (or symlink) all rst files to Documentation/output (or to the
>> >>  build dir specified via O= directive) and generate the *.pdf there,
>> >>  and produce those converted images via Makefile.;  
>> 
>> We're supposed to solve problems, not create new ones.
>
> So, what's your proposal?

Second message in the thread,
http://lkml.kernel.org/r/87wpgf8ssc@intel.com

>
> Thanks,
> Mauro
> ___
> Ksummit-discuss mailing list
> ksummit-disc...@lists.linuxfoundation.org
> https://lists.linuxfoundation.org/mailman/listinfo/ksummit-discuss

-- 
Jani Nikula, Intel Open Source Technology Center


Re: [Ksummit-discuss] Including images on Sphinx documents

2016-11-09 Thread Mauro Carvalho Chehab
On Wed, 09 Nov 2016 13:16:55 +0200,
Jani Nikula <jani.nik...@linux.intel.com> wrote:

> >> 1) copy (or symlink) all rst files to Documentation/output (or to the
> >>  build dir specified via O= directive) and generate the *.pdf there,
> >>  and produce those converted images via Makefile.;  
> 
> We're supposed to solve problems, not create new ones.

So, what's your proposal?

Thanks,
Mauro


Re: [Ksummit-discuss] Including images on Sphinx documents

2016-11-09 Thread Markus Heiser

On 09.11.2016 at 12:16, Jani Nikula <jani.nik...@linux.intel.com> wrote:
>> So I vote for :
>> 
>>> 1) copy (or symlink) all rst files to Documentation/output (or to the
>>> build dir specified via O= directive) and generate the *.pdf there,
>>> and produce those converted images via Makefile.;
> 
> We're supposed to solve problems, not create new ones.

... new ones? ...

>> IMO placing 'sourcedir' in O= is more sane, since this marries the
>> Linux Makefile concept (relative to $PWD) with the Sphinx concept
>> (in or below 'sourcedir').

-- Markus --


Re: [Ksummit-discuss] Including images on Sphinx documents

2016-11-09 Thread Jani Nikula
On Wed, 09 Nov 2016, Markus Heiser <markus.hei...@darmarit.de> wrote:
> On 07.11.2016 at 18:01, Josh Triplett <j...@joshtriplett.org> wrote:
>
>> On Mon, Nov 07, 2016 at 07:55:24AM -0200, Mauro Carvalho Chehab wrote:
> >>> 2) add a Sphinx extension that would internally call ImageMagick and/or
>>>  inkscape to convert the bitmap;
>> 
>> This seems sensible; Sphinx should directly handle the source format we
>> want to use for images/diagrams.
>> 
> >> 3) if possible, add an extension to trick Sphinx into considering the
> >>  output dir as a source dir too.
>> 
>> Or to provide an additional source path and point that at the output
>> directory.
>
> The sphinx-build command accepts only one 'sourcedir' argument. All
> reST files in this folder (and below) are parsed.
>
> Most (all?) directives which include content, like image or literalinclude,
> accept only relative pathnames, where *relative* means relative to the
> reST file where the directive is used. For security reasons, relative
> pathnames outside 'sourcedir' are not accepted.
>
> So I vote for :
>
>> 1) copy (or symlink) all rst files to Documentation/output (or to the
>>  build dir specified via O= directive) and generate the *.pdf there,
> >>  and produce those converted images via Makefile;

We're supposed to solve problems, not create new ones.

> Placing reST files together with the *autogenerated* (intermediate) 
> content from
>
> * image conversions,
> * reST content build from MAINTAINERS,
> * reST content build for ABI
> * etc.
>
> has the nice side effect that we can get rid of all these BUILDDIR
> quirks in Makefile.sphinx.
>
> Additionally, we can write Makefile targets to build the above listed
> intermediate content relative to $PWD, which is what Linux's
> Makefiles usually do (instead of quirking with a BUILDDIR).
>
> E.g., we can also get rid of the 'kernel-include' directive
> and replace it with Sphinx's standard 'literalinclude', and we do not
> need any extensions to include intermediate PDFs or whatever
> intermediate content we might want to generate.

Well, kernel-include is a hack to make parse-headers.pl work, which is
also a hack that IMHO shouldn't exist...

> IMO placing 'sourcedir' in O= is saner, since this marries the
> Linux Makefile concept (relative to $PWD) with the Sphinx concept
> (in or below 'sourcedir').
>
>
> -- Markus --
>
>

-- 
Jani Nikula, Intel Open Source Technology Center


Re: [Ksummit-discuss] Including images on Sphinx documents

2016-11-09 Thread Markus Heiser

On 07.11.2016 at 18:01, Josh Triplett <j...@joshtriplett.org> wrote:

> On Mon, Nov 07, 2016 at 07:55:24AM -0200, Mauro Carvalho Chehab wrote:
>> 2) add a Sphinx extension that would internally call ImageMagick and/or
>>  inkscape to convert the bitmap;
> 
> This seems sensible; Sphinx should directly handle the source format we
> want to use for images/diagrams.
> 
>> 3) if possible, add an extension to trick Sphinx into considering the
>>  output dir as a source dir too.
> 
> Or to provide an additional source path and point that at the output
> directory.

The sphinx-build command accepts only one 'sourcedir' argument. All
reST files in this folder (and below) are parsed.

Most (all?) directives which include content, like image or literalinclude,
accept only relative pathnames, where *relative* means relative to the
reST file where the directive is used. For security reasons, relative
pathnames outside 'sourcedir' are not accepted.
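
To make the constraint concrete (the paths here are purely hypothetical), a
file such as Documentation/media/uapi/v4l/crop.rst may only reference content
relative to itself, inside the source tree:

```rst
.. A sibling image, resolved relative to this .rst file: accepted.

.. image:: images/crop.svg
   :alt: cropping example

.. A path escaping the source tree is rejected by Sphinx:

.. image:: ../../../output/crop.pdf
```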

So I vote for :

> 1) copy (or symlink) all rst files to Documentation/output (or to the
>  build dir specified via O= directive) and generate the *.pdf there,
> >  and produce those converted images via Makefile;

Placing reST files together with the *autogenerated* (intermediate) 
content from

* image conversions,
* reST content build from MAINTAINERS,
* reST content build for ABI
* etc.

has the nice side effect that we can get rid of all these BUILDDIR
quirks in Makefile.sphinx.

Additionally, we can write Makefile targets to build the above listed
intermediate content relative to $PWD, which is what Linux's
Makefiles usually do (instead of quirking with a BUILDDIR).

E.g., we can also get rid of the 'kernel-include' directive
and replace it with Sphinx's standard 'literalinclude', and we do not
need any extensions to include intermediate PDFs or whatever
intermediate content we might want to generate.

IMO placing 'sourcedir' in O= is saner, since this marries the
Linux Makefile concept (relative to $PWD) with the Sphinx concept
(in or below 'sourcedir').
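
A minimal sketch of what the copying step could look like, assuming a small
Python helper invoked from the Makefile (the function name and layout are
illustrative, not the actual kernel build code):

```python
from pathlib import Path

def mirror_rst_sources(srcdir, builddir):
    """Symlink every .rst file from srcdir into builddir, preserving the
    directory layout, so sphinx-build can use builddir as its sourcedir
    and intermediate PDFs can be generated next to the documents."""
    srcdir, builddir = Path(srcdir), Path(builddir)
    linked = []
    for rst in srcdir.rglob("*.rst"):
        dest = builddir / rst.relative_to(srcdir)
        dest.parent.mkdir(parents=True, exist_ok=True)
        if not dest.exists():
            dest.symlink_to(rst.resolve())
        linked.append(dest)
    return linked
```

With something like that in place, image-conversion targets can write their
*.pdf output into the same build tree and still stay below Sphinx's
'sourcedir'.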


-- Markus --




Re: [Ksummit-discuss] Including images on Sphinx documents

2016-11-08 Thread Mauro Carvalho Chehab
On Mon, 7 Nov 2016 09:05:05 -0800, Josh Triplett <j...@joshtriplett.org> wrote:

> On Mon, Nov 07, 2016 at 09:46:48AM -0200, Mauro Carvalho Chehab wrote:
> > That said, PNG also doesn't seem to work fine with Sphinx 1.4.x.
> > 
> > On my tests, I installed *all* texlive extensions on Fedora 24, to
> > be sure that the issue is not the lack of some extension[1], with:
> > 
> > # dnf install $(sudo dnf search texlive |grep all|cut -d. -f 1|grep 
> > texlive-)
> > 
> > When running LaTeX in interactive mode, building just the media
> > PDF file with:
> > 
> > $ cls;make cleandocs; make SPHINXOPTS="-j5" DOCBOOKS="" 
> > SPHINXDIRS=media latexdocs 
> > $ make PDFLATEX=xelatex LATEXOPTS="-interaction=interactive" -C Documentation/output/media/latex
> > 
> > I get this:
> > 
> > LaTeX Warning: Hyper reference `uapi/v4l/subdev-formats:bayer-patterns' 
> > on page
> >  153 undefined on input line 21373.
> > 
> >  [153]
> > ! Extra alignment tab has been changed to \cr.
> >  \endtemplate 
> > 
> > l.21429 \unskip}\relax \unskip}
> >\relax \\
> > ? 
> > 
> > This patch fixes the issue:
> > 
> > https://git.linuxtv.org/mchehab/experimental.git/commit/?h=dirty-pdf=b709de415f34d77cc121cad95bece9c7ef4d12fd
> > 
> > That means that Sphinx is not generating the right LaTeX output even for
> > (some?) PNG images.  
> 
> \includegraphics normally works just fine for PNG images in PDF
> documents.

I didn't try to fix the Sphinx output in LaTeX format when a PNG
image is used but, from the above log, it clearly did something
wrong. Perhaps there's something badly defined in the sphinx.sty file,
or it simply generates the wrong LaTeX code when the image
is not in PDF format.

> [...]
> > And it may even require "--shell-escape" to be passed at the xelatex
> > call if inkscape is not in the path, which seems to be a strong
> > indication that SVG support is not native to texlive, but, instead,
> > just a way to make LaTeX call inkscape to do the image conversion.  
> 
> Please don't require --shell-escape as part of the TeX workflow.  If
> LaTeX can't handle the desired image format natively, it needs
> conversion in advance.

Agreed. I sent a patch series to linux-doc, doing the conversion in
advance:
https://marc.info/?l=linux-doc=147859902804144=2

Not sure why, but the archives don't have all patches yet.
Anyway, the relevant one is this:

https://git.linuxtv.org/mchehab/experimental.git/commit/?h=pdf-fixes=5d41c452c787f6a6c755a3855312435bc439acb8

It basically calls the ImageMagick "convert" tool for all png and
pdf files currently in the documentation (they're all in media,
ATM).


Thanks,
Mauro


Re: [Ksummit-discuss] Including images on Sphinx documents

2016-11-07 Thread Josh Triplett
On Mon, Nov 07, 2016 at 07:55:24AM -0200, Mauro Carvalho Chehab wrote:
> 2) add a Sphinx extension that would internally call ImageMagick and/or
>inkscape to convert the bitmap;

This seems sensible; Sphinx should directly handle the source format we
want to use for images/diagrams.

> 3) if possible, add an extension to trick Sphinx into considering the
>    output dir as a source dir too.

Or to provide an additional source path and point that at the output
directory.


Re: [Ksummit-discuss] Including images on Sphinx documents

2016-11-07 Thread Josh Triplett
On Mon, Nov 07, 2016 at 09:46:48AM -0200, Mauro Carvalho Chehab wrote:
> That said, PNG also doesn't seem to work fine with Sphinx 1.4.x.
> 
> On my tests, I installed *all* texlive extensions on Fedora 24, to
> be sure that the issue is not the lack of some extension[1], with:
> 
>   # dnf install $(sudo dnf search texlive |grep all|cut -d. -f 1|grep 
> texlive-)
> 
> When running LaTeX in interactive mode, building just the media
> PDF file with:
> 
>   $ cls;make cleandocs; make SPHINXOPTS="-j5" DOCBOOKS="" 
> SPHINXDIRS=media latexdocs 
>   $ make PDFLATEX=xelatex LATEXOPTS="-interaction=interactive" -C Documentation/output/media/latex
> 
> I get this:
> 
>   LaTeX Warning: Hyper reference `uapi/v4l/subdev-formats:bayer-patterns' 
> on page
>153 undefined on input line 21373.
> 
>[153]
>   ! Extra alignment tab has been changed to \cr.
>\endtemplate 
> 
>   l.21429 \unskip}\relax \unskip}
>  \relax \\
>   ? 
> 
> This patch fixes the issue:
>   
> https://git.linuxtv.org/mchehab/experimental.git/commit/?h=dirty-pdf=b709de415f34d77cc121cad95bece9c7ef4d12fd
> 
> That means that Sphinx is not generating the right LaTeX output even for
> (some?) PNG images.

\includegraphics normally works just fine for PNG images in PDF
documents.
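
As a baseline (the file name below is hypothetical), the LaTeX that Sphinx
ought to emit for a bitmap is nothing more exotic than:

```latex
\documentclass{article}
\usepackage{graphicx}  % pdflatex/xelatex decode PNG, JPEG and PDF natively
\begin{document}
% No conversion step needed for bitmaps; the width key corresponds to the
% image directive's options:
\includegraphics[width=0.8\linewidth]{images/crop.png}
\end{document}
```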

[...]
> And it may even require "--shell-escape" to be passed at the xelatex
> call if inkscape is not in the path, which seems to be a strong
> indication that SVG support is not native to texlive, but, instead,
> just a way to make LaTeX call inkscape to do the image conversion.

Please don't require --shell-escape as part of the TeX workflow.  If
LaTeX can't handle the desired image format natively, it needs
conversion in advance.

- Josh Triplett


Re: Including images on Sphinx documents

2016-11-07 Thread Mauro Carvalho Chehab
On Mon, 07 Nov 2016 12:53:55 +0200, Jani Nikula <jani.nik...@intel.com> wrote:

> On Mon, 07 Nov 2016, Mauro Carvalho Chehab <mche...@s-opensource.com> wrote:
> > Hi Jon,
> >
> > I'm trying to sort out the next steps to do after KS, with regards to
> > images included on RST files.
> >
> > The issue is that Sphinx image support highly depends on the output
> > format. Also, despite TeX Live's support for svg and png images[1], Sphinx
> > doesn't produce the right LaTeX commands to use svg[2]. On my tests
> > with PNG on my notebook, it also didn't seem to do the right thing for
> > PNG either. So, it seems that the only safe way to support images is
> > to convert all of them to PDF for the latex/pdf build.
> >
> > [1] On Fedora, via texlive-dvipng and texlive-svg
> > [2] https://github.com/sphinx-doc/sphinx/issues/1907
> >
> > As far as I understand from KS, two decisions were taken:
> >
> > - We're not adding a sphinx extension to run generic commands;
> > - The PDF images should be built at build time from their source files
> >   (either svg or bitmap), and we should no longer ship the corresponding
> >   PDF files generated from them.
> >
> > As you know, we use several images at the media documentation:
> > https://www.kernel.org/doc/html/latest/_images/
> >
> > Those images are tightly coupled with the explanation texts. So,
> > maintaining them away from the documentation is not an option.
> >
> > I was originally thinking that adding a graphviz extension would solve the
> > issue but, in fact, most of the images aren't diagrams. Instead, several
> > of them show the result of passing certain parameters to the ioctls,
> > explaining things like scaling and cropping and how bytes are packed
> > in some image formats.
> >
> > Linus proposed calling some image conversion tool like ImageMagick or
> > inkscape to convert them to PDF when building the pdfdocs or latexdocs
> > target in the Makefile, but there's an issue with that: Sphinx doesn't read
> > files from Documentation/output, and writing them directly to the
> > source dir would go against what is expected when the "O=" argument
> > is passed to make.
> >
> > So, we have a few alternatives:
> >
> > 1) copy (or symlink) all rst files to Documentation/output (or to the
> >build dir specified via O= directive) and generate the *.pdf there,
> >    and produce those converted images via Makefile;
> >
> > 2) add a Sphinx extension that would internally call ImageMagick and/or
> >inkscape to convert the bitmap;
> >
> > 3) if possible, add an extension to trick Sphinx into considering the
> >    output dir as a source dir too.  
> 
> Looking at the available extensions, and the images to be displayed, it
> seems to me that making svg work, somehow, is the right approach. (As
> opposed to trying to represent the images in graphviz or whatnot.)
> 
> IIUC texlive supports displaying svg directly, but the problem is that
> Sphinx produces bad latex for that. Can we make it work by manually
> writing the latex?

It might be possible, if we write something in the LaTeX preamble
that would replace \includegraphics with something that would, instead,
use \includesvg if the image is in SVG format. However, I don't know
enough about LaTeX to write such a macro.
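
For the record, such a preamble macro could be sketched roughly as below
(untested; \includesvg comes from the 'svg' LaTeX package, which shells out
to inkscape at build time and therefore needs --shell-escape):

```latex
\usepackage{graphicx}
\usepackage{svg}      % provides \includesvg (runs inkscape at build time)
\usepackage{xstring}  % provides \IfEndWith

% Route .svg files to \includesvg, leave everything else unchanged.
\let\origincludegraphics\includegraphics
\renewcommand{\includegraphics}[2][]{%
  \IfEndWith{#2}{.svg}%
    {\includesvg[#1]{#2}}%
    {\origincludegraphics[#1]{#2}}%
}
```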

> If yes, we wouldn't need to use an external tool to
> convert the svg to something else, but rather fix the latex. Thus:
> 
> 4a) See if this works:
> 
> .. only:: html
> 
>.. image:: foo.svg
> 
> .. raw:: latex
> 
>

This may work, although it would forever prevent using some
extension to auto-number images and to cross-reference them.

That said, PNG also doesn't seem to work fine with Sphinx 1.4.x.

On my tests, I installed *all* texlive extensions on Fedora 24, to
be sure that the issue is not the lack of some extension[1], with:

# dnf install $(sudo dnf search texlive |grep all|cut -d. -f 1|grep 
texlive-)

When running LaTeX in interactive mode, building just the media
PDF file with:

$ cls;make cleandocs; make SPHINXOPTS="-j5" DOCBOOKS="" 
SPHINXDIRS=media latexdocs 
$ make PDFLATEX=xelatex LATEXOPTS="-interaction=interactive" -C Documentation/output/media/latex

I get this:

LaTeX Warning: Hyper reference `uapi/v4l/subdev-formats:bayer-patterns' 
on page
 153 undefined on input line 21373.

 [153]
! Extra alignment tab has been changed to \cr.
 \endtemplate 

l.21429 \unskip}\relax \unskip}
   \relax \\
   

Re: Including images on Sphinx documents

2016-11-07 Thread Jani Nikula
On Mon, 07 Nov 2016, Mauro Carvalho Chehab <mche...@s-opensource.com> wrote:
> Hi Jon,
>
> I'm trying to sort out the next steps to do after KS, with regards to
> images included on RST files.
>
> The issue is that Sphinx image support highly depends on the output
> format. Also, despite TeX Live's support for svg and png images[1], Sphinx
> doesn't produce the right LaTeX commands to use svg[2]. On my tests
> with PNG on my notebook, it also didn't seem to do the right thing for
> PNG either. So, it seems that the only safe way to support images is
> to convert all of them to PDF for the latex/pdf build.
>
> [1] On Fedora, via texlive-dvipng and texlive-svg
> [2] https://github.com/sphinx-doc/sphinx/issues/1907
>
> As far as I understand from KS, two decisions were taken:
>
> - We're not adding a sphinx extension to run generic commands;
> - The PDF images should be built at build time from their source files
>   (either svg or bitmap), and we should no longer ship the corresponding
>   PDF files generated from them.
>
> As you know, we use several images at the media documentation:
>   https://www.kernel.org/doc/html/latest/_images/
>
> Those images are tightly coupled with the explanation texts. So,
> maintaining them away from the documentation is not an option.
>
> I was originally thinking that adding a graphviz extension would solve the
> issue but, in fact, most of the images aren't diagrams. Instead, several
> of them show the result of passing certain parameters to the ioctls,
> explaining things like scaling and cropping and how bytes are packed
> in some image formats.
>
> Linus proposed calling some image conversion tool like ImageMagick or
> inkscape to convert them to PDF when building the pdfdocs or latexdocs
> target in the Makefile, but there's an issue with that: Sphinx doesn't read
> files from Documentation/output, and writing them directly to the
> source dir would go against what is expected when the "O=" argument
> is passed to make.
>
> So, we have a few alternatives:
>
> 1) copy (or symlink) all rst files to Documentation/output (or to the
>build dir specified via O= directive) and generate the *.pdf there,
> >    and produce those converted images via Makefile;
>
> 2) add a Sphinx extension that would internally call ImageMagick and/or
>inkscape to convert the bitmap;
>
> 3) if possible, add an extension to trick Sphinx into considering the
>    output dir as a source dir too.

Looking at the available extensions, and the images to be displayed, it
seems to me that making svg work, somehow, is the right approach. (As
opposed to trying to represent the images in graphviz or whatnot.)

IIUC texlive supports displaying svg directly, but the problem is that
Sphinx produces bad latex for that. Can we make it work by manually
writing the latex? If yes, we wouldn't need to use an external tool to
convert the svg to something else, but rather fix the latex. Thus:

4a) See if this works:

.. only:: html

   .. image:: foo.svg

.. raw:: latex

   

4b) Add a directive extension to make the above happen automatically.

Of course, the correct fix is to have this fixed in upstream Sphinx, but
as a workaround an extension doing the above seems plausible, and not
too much effort - provided that we can make the raw latex work.
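
A 4b directive would largely boil down to emitting the 4a pattern
mechanically. As a sketch of just that text expansion (kernel_image is a
made-up name, and the \includesvg call is an assumption about what the raw
latex block would contain):

```python
def kernel_image(path):
    """Expand one image reference into the dual HTML/LaTeX form of 4a."""
    if not path.endswith(".svg"):
        # Non-SVG images work with the plain image directive as-is.
        return f".. image:: {path}\n"
    stem = path[:-4]  # \includesvg wants the name without the extension
    return (
        f".. only:: html\n\n"
        f"   .. image:: {path}\n\n"
        f".. raw:: latex\n\n"
        f"   \\includesvg{{{stem}}}\n"
    )
```

A real directive extension would build the equivalent docutils nodes instead
of emitting text, but the output shape would be the same.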

BR,
Jani.


-- 
Jani Nikula, Intel Open Source Technology Center


Including images on Sphinx documents

2016-11-07 Thread Mauro Carvalho Chehab
Hi Jon,

I'm trying to sort out the next steps to do after KS, with regards to
images included on RST files.

The issue is that Sphinx image support highly depends on the output
format. Also, despite TeX Live's support for svg and png images[1], Sphinx
doesn't produce the right LaTeX commands to use svg[2]. On my tests
with PNG on my notebook, it also didn't seem to do the right thing for
PNG either. So, it seems that the only safe way to support images is
to convert all of them to PDF for the latex/pdf build.

[1] On Fedora, via texlive-dvipng and texlive-svg
[2] https://github.com/sphinx-doc/sphinx/issues/1907

As far as I understand from KS, two decisions were taken:

- We're not adding a sphinx extension to run generic commands;
- The PDF images should be built at build time from their source files
  (either svg or bitmap), and we should no longer ship the corresponding
  PDF files generated from them.

As you know, we use several images at the media documentation:
https://www.kernel.org/doc/html/latest/_images/

Those images are tightly coupled with the explanation texts. So,
maintaining them away from the documentation is not an option.

I was originally thinking that adding a graphviz extension would solve the
issue but, in fact, most of the images aren't diagrams. Instead, several
of them show the result of passing certain parameters to the ioctls,
explaining things like scaling and cropping and how bytes are packed
in some image formats.

Linus proposed calling some image conversion tool like ImageMagick or
inkscape to convert them to PDF when building the pdfdocs or latexdocs
target in the Makefile, but there's an issue with that: Sphinx doesn't read
files from Documentation/output, and writing them directly to the
source dir would go against what is expected when the "O=" argument
is passed to make.

So, we have a few alternatives:

1) copy (or symlink) all rst files to Documentation/output (or to the
   build dir specified via O= directive) and generate the *.pdf there,
   and produce those converted images via Makefile;

2) add a Sphinx extension that would internally call ImageMagick and/or
   inkscape to convert the bitmap;

3) if possible, add an extension to trick Sphinx into considering the
   output dir as a source dir too.

Comments?

Regards,
Mauro


Re: Still images and v4l2?

2016-10-28 Thread Nicolas Dufresne
On Friday 28 October 2016 at 14:14 +0200, Michael Haardt wrote:
> I am currently developing a new image v4l2 sensor driver to acquire
> sequences of still images and wonder how to interface that to the
> v4l2 API.
> 
> Currently, cameras are assumed to deliver an endless stream of images
> after being triggered internally with VIDIOC_STREAMON.  If supported by
> the driver, a certain frame rate is used.
> For precise image capturing, I need two additional features:
> 
> Limiting the number of captured images: It is desirable not to have
> to stop streaming from user space, because of camera latency.  A
> typical application is single shots at random times, possibly with
> little time between the end of one image and the start of a new one,
> so an image that could not be stopped in time would be a problem.  A
> video camera would only support the limit value "unlimited" as a
> possible capturing limit.  Scientific cameras may offer more, or
> possibly only limited capturing.
> 
> Configuring the capture trigger: Right now sensors are implicitly
> triggered internally from the driver.  Being able to configure
> external triggers, which many sensors support, is needed to start
> capturing at exactly the right time.  Again, video cameras may only
> offer "internal" as the trigger type.
> 
> Perhaps v4l2 already offers something that I overlooked.  If not,
> what would be good ways to extend it?

In order to support the new Android Camera HAL (and other use cases),
there is work in progress to introduce a new API, called the
Request API.
https://lwn.net/Articles/641204/

> 
> Regards,
> 
> Michael


Still images and v4l2?

2016-10-28 Thread Michael Haardt
I am currently developing a new image v4l2 sensor driver to acquire
sequences of still images and wonder how to interface that to the
v4l2 API.

Currently, cameras are assumed to deliver an endless stream of images
after being triggered internally with VIDIOC_STREAMON.  If supported by
the driver, a certain frame rate is used.

For precise image capturing, I need two additional features:

Limiting the number of captured images: It is desirable not to have to stop
streaming from user space, because of camera latency.  A typical application
is single shots at random times, possibly with little time between the end
of one image and the start of a new one, so an image that could not be
stopped in time would be a problem.  A video camera would only support the
limit value "unlimited" as a possible capturing limit.  Scientific cameras
may offer more, or possibly only limited capturing.

Configuring the capture trigger: Right now sensors are implicitly
triggered internally from the driver.  Being able to configure external
triggers, which many sensors support, is needed to start capturing at
exactly the right time.  Again, video cameras may only offer "internal"
as the trigger type.

Perhaps v4l2 already offers something that I overlooked.  If not, what
would be good ways to extend it?

Regards,

Michael


Re: [PATCH 11/13] v4l: vsp1: Determine partition requirements for scaled images

2016-09-15 Thread Niklas Söderlund
On 2016-09-14 23:00:33 +0300, Laurent Pinchart wrote:
> Hi Niklas,
> 
> On Wednesday 14 Sep 2016 21:27:33 Niklas Söderlund wrote:
> > On 2016-09-14 02:17:04 +0300, Laurent Pinchart wrote:
> > > From: Kieran Bingham 
> > > 
> > > The partition algorithm needs to determine the capabilities of each
> > > entity in the pipeline to identify the correct maximum partition width.
> > > 
> > > Extend the vsp1 entity operations to provide a max_width operation and
> > > use this call to calculate the number of partitions that will be
> > > processed by the algorithm.
> > > 
> > > Gen 2 hardware does not require multiple partitioning, and as such
> > > will always return a single partition.
> > > 
> > > Signed-off-by: Kieran Bingham 
> > > Signed-off-by: Laurent Pinchart
> > > 
> > 
> > I can't find the information about the partition limitations for SRU or
> > UDS in any of the documents I have.
> 
> That's because it's not documented in the datasheet :-(

Sometimes a kind soul provides you with the proper documentation :-)

Acked-by: Niklas Söderlund 

> 
> > But for the parts not relating to the logic of figuring out the hscale from
> > the input/output formats width:
> > 
> > Reviewed-by: Niklas Söderlund 
> 
> Thanks.
> 
> -- 
> Regards,
> 
> Laurent Pinchart
> 

-- 
Regards,
Niklas Söderlund


Re: [PATCH 11/13] v4l: vsp1: Determine partition requirements for scaled images

2016-09-14 Thread Laurent Pinchart
Hi Niklas,

On Wednesday 14 Sep 2016 21:27:33 Niklas Söderlund wrote:
> On 2016-09-14 02:17:04 +0300, Laurent Pinchart wrote:
> > From: Kieran Bingham 
> > 
> > The partition algorithm needs to determine the capabilities of each
> > entity in the pipeline to identify the correct maximum partition width.
> > 
> > Extend the vsp1 entity operations to provide a max_width operation and
> > use this call to calculate the number of partitions that will be
> > processed by the algorithm.
> > 
> > Gen 2 hardware does not require multiple partitioning, and as such
> > will always return a single partition.
> > 
> > Signed-off-by: Kieran Bingham 
> > Signed-off-by: Laurent Pinchart
> > 
> 
> I can't find the information about the partition limitations for SRU or
> UDS in any of the documents I have.

That's because it's not documented in the datasheet :-(

> But for the parts not relating to the logic of figuring out the hscale from
> the input/output formats width:
> 
> Reviewed-by: Niklas Söderlund 

Thanks.

-- 
Regards,

Laurent Pinchart



Re: [PATCH 11/13] v4l: vsp1: Determine partition requirements for scaled images

2016-09-14 Thread Niklas Söderlund
On 2016-09-14 02:17:04 +0300, Laurent Pinchart wrote:
> From: Kieran Bingham 
> 
> The partition algorithm needs to determine the capabilities of each
> entity in the pipeline to identify the correct maximum partition width.
> 
> Extend the vsp1 entity operations to provide a max_width operation and
> use this call to calculate the number of partitions that will be
> processed by the algorithm.
> 
> Gen 2 hardware does not require multiple partitioning, and as such
> will always return a single partition.
> 
> Signed-off-by: Kieran Bingham 
> Signed-off-by: Laurent Pinchart 

I can't find the information about the partition limitations for SRU or 
UDS in any of the documents I have. But for the parts not relating to 
the logic of figuring out the hscale from the input/output formats 
width:

Reviewed-by: Niklas Söderlund 

> ---
>  drivers/media/platform/vsp1/vsp1_entity.h |  3 +++
>  drivers/media/platform/vsp1/vsp1_pipe.h   |  5 
>  drivers/media/platform/vsp1/vsp1_sru.c| 19 +++
>  drivers/media/platform/vsp1/vsp1_uds.c| 25 +++
>  drivers/media/platform/vsp1/vsp1_video.c  | 40 
> +++
>  5 files changed, 92 insertions(+)
> 
> diff --git a/drivers/media/platform/vsp1/vsp1_entity.h 
> b/drivers/media/platform/vsp1/vsp1_entity.h
> index 0e3e394c44cd..90a4d95c0a50 100644
> --- a/drivers/media/platform/vsp1/vsp1_entity.h
> +++ b/drivers/media/platform/vsp1/vsp1_entity.h
> @@ -77,11 +77,14 @@ struct vsp1_route {
>   * @destroy: Destroy the entity.
>   * @configure:   Setup the hardware based on the entity state (pipeline, 
> formats,
>   *   selection rectangles, ...)
> + * @max_width:   Return the max supported width of data that the entity 
> can
> + *   process in a single operation.
>   */
>  struct vsp1_entity_operations {
>   void (*destroy)(struct vsp1_entity *);
>   void (*configure)(struct vsp1_entity *, struct vsp1_pipeline *,
> struct vsp1_dl_list *, enum vsp1_entity_params);
> + unsigned int (*max_width)(struct vsp1_entity *, struct vsp1_pipeline *);
>  };
>  
>  struct vsp1_entity {
> diff --git a/drivers/media/platform/vsp1/vsp1_pipe.h 
> b/drivers/media/platform/vsp1/vsp1_pipe.h
> index d20d997b1fda..af4cd23d399b 100644
> --- a/drivers/media/platform/vsp1/vsp1_pipe.h
> +++ b/drivers/media/platform/vsp1/vsp1_pipe.h
> @@ -77,6 +77,8 @@ enum vsp1_pipeline_state {
>   * @uds_input: entity at the input of the UDS, if the UDS is present
>   * @entities: list of entities in the pipeline
>   * @dl: display list associated with the pipeline
> + * @div_size: The maximum allowed partition size for the pipeline
> + * @partitions: The number of partitions used to process one frame
>   */
>  struct vsp1_pipeline {
>   struct media_pipeline pipe;
> @@ -104,6 +106,9 @@ struct vsp1_pipeline {
>   struct list_head entities;
>  
>   struct vsp1_dl_list *dl;
> +
> + unsigned int div_size;
> + unsigned int partitions;
>  };
>  
>  void vsp1_pipeline_reset(struct vsp1_pipeline *pipe);
> diff --git a/drivers/media/platform/vsp1/vsp1_sru.c 
> b/drivers/media/platform/vsp1/vsp1_sru.c
> index 9d4a1afb6634..b4e568a3b4ed 100644
> --- a/drivers/media/platform/vsp1/vsp1_sru.c
> +++ b/drivers/media/platform/vsp1/vsp1_sru.c
> @@ -306,8 +306,27 @@ static void sru_configure(struct vsp1_entity *entity,
>   vsp1_sru_write(sru, dl, VI6_SRU_CTRL2, param->ctrl2);
>  }
>  
> +static unsigned int sru_max_width(struct vsp1_entity *entity,
> +   struct vsp1_pipeline *pipe)
> +{
> + struct vsp1_sru *sru = to_sru(&entity->subdev);
> + struct v4l2_mbus_framefmt *input;
> + struct v4l2_mbus_framefmt *output;
> +
> + input = vsp1_entity_get_pad_format(&sru->entity, sru->entity.config,
> +SRU_PAD_SINK);
> + output = vsp1_entity_get_pad_format(&sru->entity, sru->entity.config,
> + SRU_PAD_SOURCE);
> +
> + if (input->width != output->width)
> + return 512;
> + else
> + return 256;
> +}
> +
>  static const struct vsp1_entity_operations sru_entity_ops = {
>   .configure = sru_configure,
> + .max_width = sru_max_width,
>  };
>  
>  /* 
> -
> diff --git a/drivers/media/platform/vsp1/vsp1_uds.c 
> b/drivers/media/platform/vsp1/vsp1_uds.c
> index 62beae5d6944..706b6e85f47d 100644
> --- a/drivers/media/platform/vsp1/vsp1_uds.c
> +++ b/drivers/media/platform/vsp1/vsp1_uds.c
> @@ -311,8 +311,33 @@ static void uds_configure(struct vsp1_entity *entity,
>  (output->height << VI6_UDS_CLIP_SIZE_VSIZE_SHIFT));
>  }
>  
> +static unsigned int uds_max_width(struct vsp1_entity *entity,
> +   struct 

[PATCH 11/13] v4l: vsp1: Determine partition requirements for scaled images

2016-09-13 Thread Laurent Pinchart
From: Kieran Bingham 

The partition algorithm needs to determine the capabilities of each
entity in the pipeline to identify the correct maximum partition width.

Extend the vsp1 entity operations to provide a max_width operation and
use this call to calculate the number of partitions that will be
processed by the algorithm.

Gen 2 hardware does not require multiple partitioning, and as such
will always return a single partition.

Signed-off-by: Kieran Bingham 
Signed-off-by: Laurent Pinchart 
---
 drivers/media/platform/vsp1/vsp1_entity.h |  3 +++
 drivers/media/platform/vsp1/vsp1_pipe.h   |  5 
 drivers/media/platform/vsp1/vsp1_sru.c| 19 +++
 drivers/media/platform/vsp1/vsp1_uds.c| 25 +++
 drivers/media/platform/vsp1/vsp1_video.c  | 40 +++
 5 files changed, 92 insertions(+)

diff --git a/drivers/media/platform/vsp1/vsp1_entity.h 
b/drivers/media/platform/vsp1/vsp1_entity.h
index 0e3e394c44cd..90a4d95c0a50 100644
--- a/drivers/media/platform/vsp1/vsp1_entity.h
+++ b/drivers/media/platform/vsp1/vsp1_entity.h
@@ -77,11 +77,14 @@ struct vsp1_route {
  * @destroy:   Destroy the entity.
  * @configure: Setup the hardware based on the entity state (pipeline, formats,
  * selection rectangles, ...)
+ * @max_width: Return the max supported width of data that the entity can
+ * process in a single operation.
  */
 struct vsp1_entity_operations {
void (*destroy)(struct vsp1_entity *);
void (*configure)(struct vsp1_entity *, struct vsp1_pipeline *,
  struct vsp1_dl_list *, enum vsp1_entity_params);
+   unsigned int (*max_width)(struct vsp1_entity *, struct vsp1_pipeline *);
 };
 
 struct vsp1_entity {
diff --git a/drivers/media/platform/vsp1/vsp1_pipe.h 
b/drivers/media/platform/vsp1/vsp1_pipe.h
index d20d997b1fda..af4cd23d399b 100644
--- a/drivers/media/platform/vsp1/vsp1_pipe.h
+++ b/drivers/media/platform/vsp1/vsp1_pipe.h
@@ -77,6 +77,8 @@ enum vsp1_pipeline_state {
  * @uds_input: entity at the input of the UDS, if the UDS is present
  * @entities: list of entities in the pipeline
  * @dl: display list associated with the pipeline
+ * @div_size: The maximum allowed partition size for the pipeline
+ * @partitions: The number of partitions used to process one frame
  */
 struct vsp1_pipeline {
struct media_pipeline pipe;
@@ -104,6 +106,9 @@ struct vsp1_pipeline {
struct list_head entities;
 
struct vsp1_dl_list *dl;
+
+   unsigned int div_size;
+   unsigned int partitions;
 };
 
 void vsp1_pipeline_reset(struct vsp1_pipeline *pipe);
diff --git a/drivers/media/platform/vsp1/vsp1_sru.c 
b/drivers/media/platform/vsp1/vsp1_sru.c
index 9d4a1afb6634..b4e568a3b4ed 100644
--- a/drivers/media/platform/vsp1/vsp1_sru.c
+++ b/drivers/media/platform/vsp1/vsp1_sru.c
@@ -306,8 +306,27 @@ static void sru_configure(struct vsp1_entity *entity,
vsp1_sru_write(sru, dl, VI6_SRU_CTRL2, param->ctrl2);
 }
 
+static unsigned int sru_max_width(struct vsp1_entity *entity,
+				  struct vsp1_pipeline *pipe)
+{
+	struct vsp1_sru *sru = to_sru(&entity->subdev);
+	struct v4l2_mbus_framefmt *input;
+	struct v4l2_mbus_framefmt *output;
+
+	input = vsp1_entity_get_pad_format(&sru->entity, sru->entity.config,
+					   SRU_PAD_SINK);
+	output = vsp1_entity_get_pad_format(&sru->entity, sru->entity.config,
+					    SRU_PAD_SOURCE);
+
+	if (input->width != output->width)
+		return 512;
+	else
+		return 256;
+}
+
 static const struct vsp1_entity_operations sru_entity_ops = {
.configure = sru_configure,
+   .max_width = sru_max_width,
 };
 
 /* -----------------------------------------------------------------------------
diff --git a/drivers/media/platform/vsp1/vsp1_uds.c 
b/drivers/media/platform/vsp1/vsp1_uds.c
index 62beae5d6944..706b6e85f47d 100644
--- a/drivers/media/platform/vsp1/vsp1_uds.c
+++ b/drivers/media/platform/vsp1/vsp1_uds.c
@@ -311,8 +311,33 @@ static void uds_configure(struct vsp1_entity *entity,
   (output->height << VI6_UDS_CLIP_SIZE_VSIZE_SHIFT));
 }
 
+static unsigned int uds_max_width(struct vsp1_entity *entity,
+				  struct vsp1_pipeline *pipe)
+{
+	struct vsp1_uds *uds = to_uds(&entity->subdev);
+	const struct v4l2_mbus_framefmt *output;
+	const struct v4l2_mbus_framefmt *input;
+	unsigned int hscale;
+
+	input = vsp1_entity_get_pad_format(&uds->entity, uds->entity.config,
+					   UDS_PAD_SINK);
+	output = vsp1_entity_get_pad_format(&uds->entity, uds->entity.config,
+					    UDS_PAD_SOURCE);
+	hscale = output->width / input->width;
+
+	if (hscale <= 2)
+

[PATCH 15/20] [media] dev-sliced-vbi.rst: use a footnote for VBI images

2016-08-18 Thread Mauro Carvalho Chehab
Just like on dvb-raw-vbi.rst, the LaTeX output doesn't work
well with cell spans. Also, this is actually a note, so move
it to a footnote.

Signed-off-by: Mauro Carvalho Chehab 
---
 Documentation/media/uapi/v4l/dev-sliced-vbi.rst | 6 --
 1 file changed, 4 insertions(+), 2 deletions(-)

diff --git a/Documentation/media/uapi/v4l/dev-sliced-vbi.rst 
b/Documentation/media/uapi/v4l/dev-sliced-vbi.rst
index 9f59ba6847ec..9f348e164782 100644
--- a/Documentation/media/uapi/v4l/dev-sliced-vbi.rst
+++ b/Documentation/media/uapi/v4l/dev-sliced-vbi.rst
@@ -155,8 +155,7 @@ struct v4l2_sliced_vbi_format
  service the driver chooses.
 
  Data services are defined in :ref:`vbi-services2`. Array indices
- map to ITU-R line numbers (see also :ref:`vbi-525` and
- :ref:`vbi-625`) as follows:
+ map to ITU-R line numbers\ [#f2]_ as follows:
 
 -  .. row 3
 
@@ -838,3 +837,6 @@ Line Identifiers for struct v4l2_mpeg_vbi_itv0_line id field
 .. [#f1]
According to :ref:`ETS 300 706 ` lines 6-22 of the first
field and lines 5-22 of the second field may carry Teletext data.
+
+.. [#f2]
+   See also :ref:`vbi-525` and :ref:`vbi-625`.
-- 
2.7.4


--
To unsubscribe from this list: send the line "unsubscribe linux-media" in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html


Re: capture high resolution images from webcam

2015-04-13 Thread Laurent Pinchart
Hi Oliver,

On Thursday 19 March 2015 20:31:43 Oliver Lehmann wrote:
 Hi Laurent,
 
 I took the first option ;)
 
 http://pastebin.com/7YUgS2Zt

I have good news and bad news.

The good news is that the camera seems to support capturing video in 1920x1080 
natively, which is higher than the 720p you reported. This should work out of 
the box with the uvcvideo driver.

The bad news is that still image capture at higher resolutions isn't supported 
by the camera, or at least not in a UVC-compatible way. My guess is that 8MP 
is achieved either using software interpolation, or possibly using a vendor-
specific undocumented protocol. 

-- 
Regards,

Laurent Pinchart



Re: capture high resolution images from webcam

2015-03-19 Thread Laurent Pinchart
Hi Oliver,

On Thursday 19 March 2015 19:17:48 Oliver Lehmann wrote:
 Hi Laurent,
 
 Laurent Pinchart laurent.pinch...@ideasonboard.com wrote:
  Could you please post the output of lsusb -v for your camera (running as
  root if possible) ?
 
 No lsusb on FreeBSD but usbconfig - I hope the results are similar or at
 least close to what you are seeking.
 
 http://pastebin.com/s3ndu63h

The information is there, but in raw form, whereas lsusb can decode it. I'll
let you do the homework then; two options:

1. Compile lsusb for freebsd
2. Get a copy of the UVC spec
(http://www.usb.org/developers/docs/devclass_docs/USB_Video_Class_1_1_090711.zip)
and decode the descriptors

Pick your poison :-)

-- 
Regards,

Laurent Pinchart



Re: capture high resolution images from webcam

2015-03-19 Thread Oliver Lehmann

Hi Laurent,

I took the first option ;)

http://pastebin.com/7YUgS2Zt

Regards, Oliver


Re: capture high resolution images from webcam

2015-03-19 Thread Oliver Lehmann

Hi Laurent,

Laurent Pinchart laurent.pinch...@ideasonboard.com wrote:


Could you please post the output of lsusb -v for your camera (running as
root if possible) ?


No lsusb on FreeBSD but usbconfig - I hope the results are similar or at
least close to what you are seeking.

http://pastebin.com/s3ndu63h

Regards, Oliver


capture high resolution images from webcam

2015-03-17 Thread Oliver Lehmann

Hi,

I'm using v4l2 on FreeBSD but I hope this doesn't matter that much.
I got a new MS LifeCam Studio HD which takes quite good pictures
because of its focus possibilities.

When I use the original software provided by MS, the autofocus
feature works damn well. With v4l2, autofocus is enabled but it
just does not focus. Disabling autofocus and setting focus manually
does work (and in my case this is sufficient).

Another point is that this cam can record pictures at 8 megapixels,
which results in 3840x2160 image files. The 8MP mode and the 1080p
mode are only available for snapshot pictures. The highest resolution
supported for videos is 720p.

All I want is recording snapshot images and I do not need the video
capability at all.

I wonder how I can capture those big 8MP images? With mplayer I'm
only able to capture 720p at max. I guess because mplayer just
accesses the video mode and takes a single frame.

mplayer tv:// -tv driver=v4l2:device=/dev/video0:width=1280:height=720  
-frames 1 -vo jpeg


I wonder if there is a possibility to access the cam in the
I-call-it-snapshot-mode to take single pictures with higher resolutions?

Regards, Oliver


Re: Corrupt images, when capturing images from multiple cameras using the V4L2 driver

2014-09-08 Thread Alaganraj
Hi Karel,
On Friday 05 September 2014 06:09 PM, Mácha, Karel wrote:
 Hello Alaganraj,

 thank you very much for your reply and your tips. I tried to improve my
 code as you suggested and also implemented several other things, which
 should improve the result.

 I set the cameras to 15fps using VIDIOC_S_PARM. This reduces the
 frequency of corrupt images.
 I zero fill any structs before using ioctl with the #define CLEAR(x)
 memset((x), 0, sizeof(x)) macro.
 I use select() before VIDIOC_DQBUF.
 I copy the buffer and create the resulting image from the copy, so I can
 put the original buffer back immediately.
Instead of copying, increase the number of buffers to 6 or 8 in REQBUFS.
As you are storing images, I think it's OK to have frame drops.
 I tested my code with valgrind and there are no memory leaks.

 However I still get corrupt images. If I use 4 cameras I get almost no
 bad images, 6 cameras produce mostly good pictures, but 8 cams produce
 mostly not a single error-free set.

 Sometimes buffer.bytesused returns fewer bytes than x-resolution x
 y-resolution and I get bad pictures; sometimes I get bad pictures even
 if the bytesused value is OK.

 The most surprising finding is how significantly the sleep() time before
 every VIDIOC_STREAMON improves the result. If I wait 5 seconds before I
 turn on the next camera, I get the following results.
I don't know how this delay impacts the result. Maybe the experts can guide you.
Always cc people; for example, you've taken this code from Laurent's
presentation, so cc him to get his attention.
 - first 8 images: mostly corrupt
 - second 8 images: few of them bad
 - next 5-6 series are clear
 - after the 7th, 8th image series (camera1 ... camera8) artifacts appear
 again. The longer the capture runs, the more artifacts appear.

 I put the improved version of the code here:
 http://pastebin.mozilla.org/6330868

 I really do not understand what is the source of my problems.

 Greetings - Karel

 Am Freitag, den 05.09.2014, 01:41 +0530 schrieb Alaganraj Sandhanam:
 Hi Karel,

 I suggest you to zero fill v4l2 structures before assign values also
 check the return value of all ioctl call.
 for example,

 struct v4l2_format fmt;

 memset(fmt, 0, sizeof fmt);
 fmt.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
 fmt.fmt.pix.width = xRes;
 ret = ioctl(fd, VIDIOC_S_FMT, fmt);
 if (ret  0)
printf(VIDIOC_S_FMT failed: %d\n, ret);

 Please find comments in-line.

 On Tuesday 02 September 2014 05:36 PM, Mácha, Karel wrote:
 Hello, 

 I would like to grab images from multiple cameras under using the V4L2
 API. I followed the presentation under found on
 http://linuxtv.org/downloads/presentations/summit_jun_2010/20100206-fosdem.pdf
  
 used the code and adapted it sightly for my purpose. It works very well
 for 1 camera.

 However Once I begin to grab images from multiple cameras (successively)
 I get corrupt images. I uploaded an example image to
 http://www.directupload.net/file/d/3733/9c4jx3pv_png.htm

 Although I set the right resolution for the camera (744 x 480), the
 output of buffer.bytesused, after the VIDIOC_DQBUF does not correspond
 with the expected value (744x480 = 357120). This would probably explain
 the corrupt images.

 The more camera I use, the less buffer.bytesused I get and the more
 stripes are in the image. Could you please give me a hint, what am I
 doing wrong ?

 Thanks, Karel

 Here is the minimal C code I use for my application:


 int main()
 {
 /* # INIT # */

 int numOfCameras = 6;
 As it works well for 1 camera, try with only 2 instead of 6
 int xRes = 744;
 int yRes = 480;
 int exposure = 2000;
 unsigned int timeBetweenSnapshots = 2; // in sec
 char fileName[sizeof ./output/image 000 from camera 0.PNG];

 static const char *devices[] = { /dev/video0, /dev/video1,
 /dev/video2, /dev/video3, /dev/video4, /dev/video5,
 /dev/video6, /dev/video7 };

 struct v4l2_capability cap[8];
 struct v4l2_control control[8];
 struct v4l2_format format[8];
 struct v4l2_requestbuffers req[8];
 struct v4l2_buffer buffer[8];

 int type = V4L2_BUF_TYPE_VIDEO_CAPTURE; // had to declare the type here
 because of the loop

 unsigned int i;
 unsigned int j;
 unsigned int k;

 int fd[8];
 void **mem[8];
 //unsigned char **mem[8];

 /* # OPEN DEVICE # */

 for (j = 0; j  numOfCameras; ++j) {

 fd[j] = open(devices[j], O_RDWR);
 ioctl(fd[j], VIDIOC_QUERYCAP, cap[j]);
 check the return value

 /* # CAM CONTROLL ### */

 zero fill control[j]
 memset(control[j], 0, sizeof control[j]);
 control[j].id = V4L2_CID_EXPOSURE_AUTO;
 control[j].value = V4L2_EXPOSURE_SHUTTER_PRIORITY;
 ioctl(fd[j], VIDIOC_S_CTRL, control[j]);

 control[j].id = V4L2_CID_EXPOSURE_ABSOLUTE

Re: Corrupt images, when capturing images from multiple cameras using the V4L2 driver

2014-09-05 Thread Mácha , Karel
Hello Alaganraj,

thank you very much for your reply and your tips. I tried to improve my
code as you suggested and also implemented several other things, which
should improve the result.

I set the cameras to 15fps using VIDIOC_S_PARM. This reduces the
frequency of corrupt images.
I zero fill any structs before using ioctl with the #define CLEAR(x)
memset((x), 0, sizeof(x)) macro.
I use select() before VIDIOC_DQBUF.
I copy the buffer and create the resulting image from the copy, so I can
put the original buffer back immediately.
I tested my code with valgrind and there are no memory leaks.

However I still get corrupt images. If I use 4 cameras I get almost no
bad images, 6 cameras produce mostly good pictures, but 8 cams produce
mostly not a single error-free set.

Sometimes buffer.bytesused returns fewer bytes than x-resolution x
y-resolution and I get bad pictures; sometimes I get bad pictures even
if the bytesused value is OK.

The most surprising finding is how significantly the sleep() time before
every VIDIOC_STREAMON improves the result. If I wait 5 seconds before I
turn on the next camera, I get the following results:
- first 8 images: mostly corrupt
- second 8 images: few of them bad
- next 5-6 series are clear
- after the 7th, 8th image series (camera1 ... camera8) artifacts appear
again. The longer the capture runs, the more artifacts appear.

I put the improved version of the code here:
http://pastebin.mozilla.org/6330868

I really do not understand what is the source of my problems.

Greetings - Karel


Re: Corrupt images, when capturing images from multiple cameras using the V4L2 driver

2014-09-04 Thread Alaganraj Sandhanam
Hi Karel,

I suggest you zero-fill v4l2 structures before assigning values, and also
check the return value of every ioctl call.
For example,

struct v4l2_format fmt;

memset(&fmt, 0, sizeof fmt);
fmt.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
fmt.fmt.pix.width = xRes;
ret = ioctl(fd, VIDIOC_S_FMT, &fmt);
if (ret < 0)
   printf("VIDIOC_S_FMT failed: %d\n", ret);

Please find comments in-line.

On Tuesday 02 September 2014 05:36 PM, Mácha, Karel wrote:
 Hello, 
 
 I would like to grab images from multiple cameras using the V4L2
 API. I followed the presentation found at
 http://linuxtv.org/downloads/presentations/summit_jun_2010/20100206-fosdem.pdf
 and used the code, adapting it slightly for my purpose. It works very well
 for 1 camera.
 
 However, once I begin to grab images from multiple cameras (successively)
 I get corrupt images. I uploaded an example image to
 http://www.directupload.net/file/d/3733/9c4jx3pv_png.htm
 
 Although I set the right resolution for the camera (744 x 480), the
 output of buffer.bytesused after VIDIOC_DQBUF does not correspond
 to the expected value (744x480 = 357120). This would probably explain
 the corrupt images.
 
 The more cameras I use, the smaller buffer.bytesused gets and the more
 stripes are in the image. Could you please give me a hint: what am I
 doing wrong?
 
 Thanks, Karel
 
 Here is the minimal C code I use for my application:
 
 
 int main()
 {
   /* # INIT # */
 
   int numOfCameras = 6;
As it works well for 1 camera, try with only 2 instead of 6
   int xRes = 744;
   int yRes = 480;
   int exposure = 2000;
   unsigned int timeBetweenSnapshots = 2; // in sec
   char fileName[sizeof "./output/image 000 from camera 0.PNG"];
 
   static const char *devices[] = { "/dev/video0", "/dev/video1",
 "/dev/video2", "/dev/video3", "/dev/video4", "/dev/video5",
 "/dev/video6", "/dev/video7" };
 
   struct v4l2_capability cap[8];
   struct v4l2_control control[8];
   struct v4l2_format format[8];
   struct v4l2_requestbuffers req[8];
   struct v4l2_buffer buffer[8];
 
   int type = V4L2_BUF_TYPE_VIDEO_CAPTURE; // had to declare the type here
 because of the loop
 
   unsigned int i;
   unsigned int j;
   unsigned int k;
 
   int fd[8];
   void **mem[8];
   //unsigned char **mem[8];
 
   /* # OPEN DEVICE # */
 
   for (j = 0; j < numOfCameras; ++j) {
 
   fd[j] = open(devices[j], O_RDWR);
   ioctl(fd[j], VIDIOC_QUERYCAP, &cap[j]);
check the return value
 
   /* # CAM CONTROLL ### */
 
zero fill control[j]
memset(&control[j], 0, sizeof control[j]);
   control[j].id = V4L2_CID_EXPOSURE_AUTO;
   control[j].value = V4L2_EXPOSURE_SHUTTER_PRIORITY;
   ioctl(fd[j], VIDIOC_S_CTRL, &control[j]);
 
   control[j].id = V4L2_CID_EXPOSURE_ABSOLUTE;
   control[j].value = exposure;
   ioctl(fd[j], VIDIOC_S_CTRL, &control[j]);
 
   /* # FORMAT # */
 
zero fill format[j]
memset(&format[j], 0, sizeof format[j]);
   ioctl(fd[j], VIDIOC_G_FMT, &format[j]);
   format[j].type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
   format[j].fmt.pix.width = xRes;
   format[j].fmt.pix.height = yRes;
   //format.fmt.pix.pixelformat = V4L2_PIX_FMT_YUYV;
   format[j].fmt.pix.pixelformat = V4L2_PIX_FMT_GREY;
   ioctl(fd[j], VIDIOC_S_FMT, &format[j]);
 
   /* # REQ BUF  */
 
memset(&req[j], 0, sizeof req[j]);
   req[j].type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
   req[j].count = 4;
   req[j].memory = V4L2_MEMORY_MMAP;
   ioctl(fd[j], VIDIOC_REQBUFS, &req[j]);
   mem[j] = malloc(req[j].count * sizeof(*mem));
 
   /* # MMAP # */
 
   for (i = 0; i < req[j].count; ++i) {
memset(&buffer[j], 0, sizeof buffer[j]);
   buffer[j].type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
   buffer[j].memory = V4L2_MEMORY_MMAP;
   buffer[j].index = i;
   ioctl(fd[j], VIDIOC_QUERYBUF, &buffer[j]);
   mem[j][i] = mmap(0, buffer[j].length,
   PROT_READ|PROT_WRITE,
   MAP_SHARED, fd[j], buffer[j].m.offset);
   }
 
   /* # CREATE QUEUE ### */
 
   for (i = 0; i < req[j].count; ++i) {
memset(&buffer[j], 0, sizeof buffer[j]);
   buffer[j].type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
   buffer[j].memory = V4L2_MEMORY_MMAP;
   buffer[j].index = i

Corrupt images, when capturing images from multiple cameras using the V4L2 driver

2014-09-02 Thread Mácha , Karel
Hello, 

I would like to grab images from multiple cameras using the V4L2
API. I followed the presentation found at
http://linuxtv.org/downloads/presentations/summit_jun_2010/20100206-fosdem.pdf
and used the code, adapting it slightly for my purpose. It works very well
for 1 camera.

However, once I begin to grab images from multiple cameras (successively)
I get corrupt images. I uploaded an example image to
http://www.directupload.net/file/d/3733/9c4jx3pv_png.htm

Although I set the right resolution for the camera (744 x 480), the
output of buffer.bytesused after VIDIOC_DQBUF does not correspond
to the expected value (744x480 = 357120). This would probably explain
the corrupt images.

The more cameras I use, the smaller buffer.bytesused gets and the more
stripes are in the image. Could you please give me a hint: what am I
doing wrong?

Thanks, Karel

Here is the minimal C code I use for my application:


int main()
{
/* # INIT # */

int numOfCameras = 6;
int xRes = 744;
int yRes = 480;
int exposure = 2000;
unsigned int timeBetweenSnapshots = 2; // in sec
char fileName[sizeof "./output/image 000 from camera 0.PNG"];

static const char *devices[] = { "/dev/video0", "/dev/video1",
"/dev/video2", "/dev/video3", "/dev/video4", "/dev/video5",
"/dev/video6", "/dev/video7" };

struct v4l2_capability cap[8];
struct v4l2_control control[8];
struct v4l2_format format[8];
struct v4l2_requestbuffers req[8];
struct v4l2_buffer buffer[8];

int type = V4L2_BUF_TYPE_VIDEO_CAPTURE; // had to declare the type here
// because of the loop

unsigned int i;
unsigned int j;
unsigned int k;

int fd[8];
void **mem[8];
//unsigned char **mem[8];

/* # OPEN DEVICE # */

for (j = 0; j < numOfCameras; ++j) {

fd[j] = open(devices[j], O_RDWR);
ioctl(fd[j], VIDIOC_QUERYCAP, &cap[j]);


/* # CAM CONTROLL ### */

control[j].id = V4L2_CID_EXPOSURE_AUTO;
control[j].value = V4L2_EXPOSURE_SHUTTER_PRIORITY;
ioctl(fd[j], VIDIOC_S_CTRL, &control[j]);

control[j].id = V4L2_CID_EXPOSURE_ABSOLUTE;
control[j].value = exposure;
ioctl(fd[j], VIDIOC_S_CTRL, &control[j]);

/* # FORMAT # */

ioctl(fd[j], VIDIOC_G_FMT, &format[j]);
format[j].type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
format[j].fmt.pix.width = xRes;
format[j].fmt.pix.height = yRes;
//format.fmt.pix.pixelformat = V4L2_PIX_FMT_YUYV;
format[j].fmt.pix.pixelformat = V4L2_PIX_FMT_GREY;
ioctl(fd[j], VIDIOC_S_FMT, &format[j]);

/* # REQ BUF  */

req[j].type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
req[j].count = 4;
req[j].memory = V4L2_MEMORY_MMAP;
ioctl(fd[j], VIDIOC_REQBUFS, &req[j]);
mem[j] = malloc(req[j].count * sizeof(*mem));

/* # MMAP # */

for (i = 0; i < req[j].count; ++i) {
buffer[j].type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
buffer[j].memory = V4L2_MEMORY_MMAP;
buffer[j].index = i;
ioctl(fd[j], VIDIOC_QUERYBUF, &buffer[j]);
mem[j][i] = mmap(0, buffer[j].length,
PROT_READ|PROT_WRITE,
MAP_SHARED, fd[j], buffer[j].m.offset);
}

/* # CREATE QUEUE ### */

for (i = 0; i < req[j].count; ++i) {
buffer[j].type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
buffer[j].memory = V4L2_MEMORY_MMAP;
buffer[j].index = i;
ioctl(fd[j], VIDIOC_QBUF, &buffer[j]);
}

} /* ### ### end of camera init ### ### */

/* # STREAM ON # */
for (j = 0; j < numOfCameras; ++j) {

ioctl(fd[j], VIDIOC_STREAMON, &type);
}


/* # GET FRAME # */

k = 0;
while (!kbhit()){
k++;

for (j = 0; j < numOfCameras; j++) {

buffer[j].type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
buffer[j].memory = V4L2_MEMORY_MMAP;
usleep(10);
ioctl(fd[j], VIDIOC_DQBUF, &buffer[j]);
printf("\nBuffer {%p}, Buf. Index %d, Buf. bytes used %d\n

Re: Comparisons of images between Dazzle DVC100, EasyCap stk1160 and Hauppauge ImpactVCB-e in Linux.

2014-04-25 Thread Steve Cookson

Hi Guys,

(I'm copying Ezequial on this because of the work he has done on the 
stk1160).


My colleague (a Doctor) had this to say on the medical images I posted 
earlier (see below):


  The impactVCB-e image is redder and less clear. The dvc100 and 
easycap seem similar to me and both of them are not as good as the 
original one.


So I have to ask how is it that the cheap little EasyCap is performing 
at the same level as the Dazzle DVC100 and better than the ImpactVCB-e?


It seems to me that the more complex and expensive DVC100 and 
ImpactVCB-e should perform well and that the EasyCap should be the 
runner-up.


If the DVC100 and ImpactVCB-e had had the same love and attention that 
Ezequial has shown the EasyCap, would they outperform it?


The ImpactVCB-e is easier to use internally and the Dazzle is external.  
Does the fact that the ImpactVCB-e has a PCI-e connector help it at all?


Otherwise I should just focus on EasyCap for my raw SD capture and move on.

Thanks,

Steve


On 23/04/14 16:22, Steve Cookson wrote:

Hi Guys,

I would be interested in your views of the comparisons of these 
images.  The still is the image of a duodenum taken during an 
endoscopy and recorded to a DVD recorder (via an s-video or composite 
cable).  Although the endoscope is an HD endoscope, the DVD recorder 
isn't, and the resulting video is 720x480i59.94.


Here are further details of the video:-

Format : MPEG Video v2
Format profile : Main@Main
Format settings, BVOP : Yes, Matrix : Custom, GOP : M=3, N=15
Bit rate mode : Variable
Bit rate : 4 566 Kbps
Maximum bit rate : 10 000 Kbps
Width : 720 pixels
Height : 480 pixels
Display aspect ratio : 4:3
Frame rate : 29.970 fps
Standard : NTSC
Color space : YUV
Chroma subsampling : 4:2:0
Bit depth : 8 bits
Scan type : Interlaced
Scan order : Top Field First
Compression mode : Lossy
Bits/(Pixel*Frame) : 0.441

The video was played through Dragon Player and the video signal has 
exited through a mini-VGA port defined as 640x480 and passed through a 
VGA-S-Video converter to an s-video cable.


The cable has in turn been connected in turn to a Dazzle DVC100, an 
EasyCap stk1160 and a Hauppauge ImapctVCB-e.


Each setting (e.g. brightness and contrast) has been set as near as 
possible to mid-range and a screengrab taken.


The results are shown here:

Original: 
http://tinypic.com/usermedia.php?uo=fNkd6hpTbcMrgmD6gSf74Ih4l5k2TGxc


Dazzle DVC100: 
http://tinypic.com/usermedia.php?uo=fNkd6hpTbcMaOf4QTsIefYh4l5k2TGxc


ImpactVCB-e: 
http://tinypic.com/usermedia.php?uo=fNkd6hpTbcM7i72IqGujuIh4l5k2TGxc


STK1160: 
http://tinypic.com/usermedia.php?uo=fNkd6hpTbcPO7kmQk/IS94h4l5k2TGxc


I would be grateful for your views on the quality of the images.

Is one of them materially higher quality than the others, or can I adjust 
the settings to improve the quality of one of them further?


It seems to me that the Hauppauge is marginally better than the 
others.  What do you think?


Can I improve the test?

Regards

Steve.








Re: Comparisons of images between Dazzle DVC100, EasyCap stk1160 and Hauppauge ImpactVCB-e in Linux.

2014-04-25 Thread Steven Toth
Doh, HTML caused vger to drop it. Resending.

On Fri, Apr 25, 2014 at 8:47 AM, Steven Toth st...@kernellabs.com wrote:
 On Fri, Apr 25, 2014 at 8:19 AM, Steve Cookson i...@sca-uk.com wrote:

 Hi Guys,


 Hello again.


 (I'm copying Ezequiel on this because of the work he has done on the
 stk1160).

 My colleague (a Doctor) had this to say on the medical images I posted
 earlier (see below):

   The impactVCB-e image is redder and less clear. The dvc100 and easycap
  seem similar to me and both of them are not as good as the original one.

 So I have to ask how is it that the cheap little EasyCap is performing at
 the same level as the Dazzle DVC100 and better than the ImpactVCB-e?


 It's usually either because the Linux engineers simply aren't focused on
 improving quality for the last 'mile', or because the ADC/video digitizer
 differs between products. 'Redder' isn't usually a differentiator between
 video digitizer silicon; it's misconfiguration, for example.



 It seems to me that the more complex and expensive DVC100 and ImpactVCB-e
 should perform well and that the EasyCap should be the runner up.


 Not necessarily, but I can understand why you may think that. The reality is
 that you are measuring the work of individuals who were not paid to create
 these Linux drivers. They work well enough for their needs, they scratched
 their own itch, then they moved on; more than likely this is the case. So,
 you are seeing a mixture of different silicon, different attention to
 detail, different default values and different levels of commitment in the
 driver developers.

 If the DVC100 and ImpactVCB-e had had the same love and attention that
 Ezequiel has shown the EasyCap, would they outperform it?


 Hard to say. In my experience working for Hauppauge, directly in their
 engineering team, having worked on literally hundreds of products and lots of
 different video digitizers, they're all fairly close quality-wise in general,
 with good signals and the right drivers.

 What makes a good video digitizer stand out from a poor one is how it
 performs in very poor signal conditions.



 The ImpactVCB-e is easier to use internally and the Dazzle is external.
 Does the fact that the ImpactVCB-e has a PCI-e connector help it at all?



 Invariably not.



 Otherwise I should just focus on EasyCap for my raw SD capture and move
 on.


 You need to make that judgement call. If you are building a business around
 medical imaging and have no engineering capability to improve drivers, go
 with what your gut says and what's available to you today. You shouldn't
 rely on the good will of Linux engineers to fix your business requirements
 in their spare time, in the coming weeks or months.

 Or, hire a consultant to improve the drivers for you and the rest of the
 Linux community. This is what everyone else does.

 - Steve


 Thanks,

 Steve


 On 23/04/14 16:22, Steve Cookson wrote:

 Hi Guys,

 I would be interested in your views on the comparison of these images.
 The still is the image of a duodenum taken during an endoscopy and recorded
 to a DVD recorder (via an s-video or composite cable).  Although the endoscope
 is an HD endoscope, the DVD recorder isn't and the resulting video is
 720x480i59.94.

 Here are further details of the video:-

 Format : MPEG Video v2
 Format profile : Main@Main
 Format settings, BVOP : Yes, Matrix : Custom, GOP : M=3, N=15
 Bit rate mode : Variable
 Bit rate : 4 566 Kbps
 Maximum bit rate : 10 000 Kbps
 Width : 720 pixels
 Height : 480 pixels
 Display aspect ratio : 4:3
 Frame rate : 29.970 fps
 Standard : NTSC
 Color space : YUV
 Chroma subsampling : 4:2:0
 Bit depth : 8 bits
 Scan type : Interlaced
 Scan order : Top Field First
 Compression mode : Lossy
 Bits/(Pixel*Frame) : 0.441

 The video was played through Dragon Player and the video signal
 exited through a mini-VGA port defined as 640x480, then passed through a
 VGA-to-S-Video converter to an s-video cable.

 The cable was connected in turn to a Dazzle DVC100, an
 EasyCap stk1160 and a Hauppauge ImpactVCB-e.

 Each setting (e.g. brightness, contrast, etc.) was set as near as possible to
 mid-range and a screengrab taken.

 The results are shown here:

 Original:
 http://tinypic.com/usermedia.php?uo=fNkd6hpTbcMrgmD6gSf74Ih4l5k2TGxc

 Dazzle DVC100:
 http://tinypic.com/usermedia.php?uo=fNkd6hpTbcMaOf4QTsIefYh4l5k2TGxc

 ImpactVCB-e:
 http://tinypic.com/usermedia.php?uo=fNkd6hpTbcM7i72IqGujuIh4l5k2TGxc

 STK1160:
 http://tinypic.com/usermedia.php?uo=fNkd6hpTbcPO7kmQk/IS94h4l5k2TGxc

 I would be grateful for your views on the quality of the images.

 Is one of them of materially higher quality than the others, or can I adjust
 the settings to improve one of them further?

 It seems to me that the Hauppauge is marginally better than the others.
 What do you think?

 Can I improve the test?

 Regards

 Steve.



 --
 To unsubscribe from this list: send the line unsubscribe linux-media in
 the body of a message to majord

Re: Comparisons of images between Dazzle DVC100, EasyCap stk1160 and Hauppauge ImpactVCB-e in Linux.

2014-04-25 Thread Hans Verkuil
I looked at the pictures as well and I noticed a few things:

The aspect ratio of the impactvcb-e isn't right, it's too narrow. What width
and height did you specify when capturing? I can't test this myself as I'm
abroad, but try 720x480 or 768x480.

I also noticed that the cx23885 driver selects slightly incorrect default
brightness, contrast and saturation values. Whether that is the cause of the
color differences is debatable, but you can try using my repo:

git://linuxtv.org/hverkuil/media_tree.git

Checkout the cx23 branch.

This branch contains a pile of cleanup patches including a conversion to the
control framework, which should ensure correct (to my knowledge) defaults for
the color controls.

Regards,

Hans

On 04/25/2014 02:48 PM, Steven Toth wrote:
 Doh, HTML caused vger to drop it. Resending.
 
 On Fri, Apr 25, 2014 at 8:47 AM, Steven Toth st...@kernellabs.com wrote:
 On Fri, Apr 25, 2014 at 8:19 AM, Steve Cookson i...@sca-uk.com wrote:

 Hi Guys,


 Hello again.


 (I'm copying Ezequiel on this because of the work he has done on the
 stk1160).

 My colleague (a Doctor) had this to say on the medical images I posted
 earlier (see below):

  The impactVCB-e image is redder and less clear. The dvc100 and easycap
 seem similar to me and both of them are not as good as the original one.

 So I have to ask how is it that the cheap little EasyCap is performing at
 the same level as the Dazzle DVC100 and better than the ImpactVCB-e?


 It's usually either because the Linux engineers simply aren't focused on
 improving quality for the last 'mile', or because the ADC/video digitizer
 differs between products. 'Redder' isn't usually a differentiator between
 video digitizer silicon; it's misconfiguration, for example.



 It seems to me that the more complex and expensive DVC100 and ImpactVCB-e
 should perform well and that the EasyCap should be the runner up.


 Not necessarily, but I can understand why you may think that. The reality is
 that you are measuring the work of individuals who were not paid to create
 these Linux drivers. They work well enough for their needs, they scratched
 their own itch, then they moved on; more than likely this is the case. So,
 you are seeing a mixture of different silicon, different attention to
 detail, different default values and different levels of commitment in the
 driver developers.

 If the DVC100 and ImpactVCB-e had had the same love and attention that
 Ezequiel has shown the EasyCap, would they outperform it?


 Hard to say. In my experience working for Hauppauge, directly in their
 engineering team, having worked on literally hundreds of products and lots of
 different video digitizers, they're all fairly close quality-wise in general,
 with good signals and the right drivers.

 What makes a good video digitizer stand out from a poor one is how it
 performs in very poor signal conditions.



 The ImpactVCB-e is easier to use internally and the Dazzle is external.
 Does the fact that the ImpactVCB-e has a PCI-e connector help it at all?



 Invariably not.



 Otherwise I should just focus on EasyCap for my raw SD capture and move
 on.


 You need to make that judgement call. If you are building a business around
 medical imaging and have no engineering capability to improve drivers, go
 with what your gut says and what's available to you today. You shouldn't
 rely on the good will of Linux engineers to fix your business requirements
 in their spare time, in the coming weeks or months.

 Or, hire a consultant to improve the drivers for you and the rest of the
 Linux community. This is what everyone else does.

 - Steve


 Thanks,

 Steve


 On 23/04/14 16:22, Steve Cookson wrote:

 Hi Guys,

 I would be interested in your views on the comparison of these images.
 The still is the image of a duodenum taken during an endoscopy and recorded
 to a DVD recorder (via an s-video or composite cable).  Although the endoscope
 is an HD endoscope, the DVD recorder isn't and the resulting video is
 720x480i59.94.

 Here are further details of the video:-

 Format : MPEG Video v2
 Format profile : Main@Main
 Format settings, BVOP : Yes, Matrix : Custom, GOP : M=3, N=15
 Bit rate mode : Variable
 Bit rate : 4 566 Kbps
 Maximum bit rate : 10 000 Kbps
 Width : 720 pixels
 Height : 480 pixels
 Display aspect ratio : 4:3
 Frame rate : 29.970 fps
 Standard : NTSC
 Color space : YUV
 Chroma subsampling : 4:2:0
 Bit depth : 8 bits
 Scan type : Interlaced
 Scan order : Top Field First
 Compression mode : Lossy
 Bits/(Pixel*Frame) : 0.441

 The video was played through Dragon Player and the video signal
 exited through a mini-VGA port defined as 640x480, then passed through a
 VGA-to-S-Video converter to an s-video cable.

 The cable was connected in turn to a Dazzle DVC100, an
 EasyCap stk1160 and a Hauppauge ImpactVCB-e.

 Each setting (e.g. brightness, contrast, etc.) was set as near as possible to
 mid-range and a screengrab taken.

 The results are shown here

Re: Comparisons of images between Dazzle DVC100, EasyCap stk1160 and Hauppauge ImpactVCB-e in Linux.

2014-04-25 Thread Ezequiel Garcia
On Apr 25, Steve Cookson wrote:
[..]
 
 If the DVC100 and ImpactVCB-e had had the same love and attention that
 Ezequiel has shown the EasyCap, would they outperform it?
 

I really appreciate the kind words and I'm happy to see the driver is being
used and works well.

However, to be fair, the stk1160 is just a little link in a larger chain.
The video decoder is controlled through another driver (saa7115), so
I deserve little credit for the image quality.

[..]
 
 Otherwise I should just focus on EasyCap for my raw SD capture and move on.
 

Hm.. hard to say. Easycap stk1160 devices are hard to get; it seems they are
manufactured in a few different hardware flavors (but with the same case)
and it's hard to tell which flavor you are buying.

Right now, I think we support the two stk1160 variants, but there's a
third Easycap variant that is currently unsupported (it doesn't use an stk1160
but a Somagic chipset).

In addition, given that the stk1160 driver was partly reverse-engineered and
partly based on a non-public and very laconic datasheet, bugs are not easy
to fix given the lack of details about the chip.

In conclusion, if you pick the stk1160, make sure you buy enough Easycap devices
(from the same provider) and make the extra effort to test *each* of them to
ensure they work as you expect.
-- 
Ezequiel García, Free Electrons
Embedded Linux, Kernel and Android Engineering
http://free-electrons.com
--
To unsubscribe from this list: send the line unsubscribe linux-media in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html


Comparisons of images between Dazzle DVC100, EasyCap stk1160 and Hauppauge ImpactVCB-e in Linux.

2014-04-23 Thread Steve Cookson

Hi Guys,

I would be interested in your views on the comparison of these images.  
The still is the image of a duodenum taken during an endoscopy and 
recorded to a DVD recorder (via an s-video or composite cable).  Although 
the endoscope is an HD endoscope, the DVD recorder isn't and the 
resulting video is 720x480i59.94.


Here are further details of the video:-

Format : MPEG Video v2
Format profile : Main@Main
Format settings, BVOP : Yes, Matrix : Custom, GOP : M=3, N=15
Bit rate mode : Variable
Bit rate : 4 566 Kbps
Maximum bit rate : 10 000 Kbps
Width : 720 pixels
Height : 480 pixels
Display aspect ratio : 4:3
Frame rate : 29.970 fps
Standard : NTSC
Color space : YUV
Chroma subsampling : 4:2:0
Bit depth : 8 bits
Scan type : Interlaced
Scan order : Top Field First
Compression mode : Lossy
Bits/(Pixel*Frame) : 0.441

The video was played through Dragon Player and the video signal 
exited through a mini-VGA port defined as 640x480, then passed through a 
VGA-to-S-Video converter to an S-Video cable.


The cable was connected in turn to a Dazzle DVC100, an 
EasyCap stk1160 and a Hauppauge ImpactVCB-e.


Each setting (e.g. brightness, contrast, etc.) was set as near as possible to 
mid-range and a screengrab taken.


The results are shown here:

Original: 
http://tinypic.com/usermedia.php?uo=fNkd6hpTbcMrgmD6gSf74Ih4l5k2TGxc


Dazzle DVC100: 
http://tinypic.com/usermedia.php?uo=fNkd6hpTbcMaOf4QTsIefYh4l5k2TGxc


ImpactVCB-e: 
http://tinypic.com/usermedia.php?uo=fNkd6hpTbcM7i72IqGujuIh4l5k2TGxc


STK1160: 
http://tinypic.com/usermedia.php?uo=fNkd6hpTbcPO7kmQk/IS94h4l5k2TGxc


I would be grateful for your views on the quality of the images.

Is one of them of materially higher quality than the others, or can I adjust 
the settings to improve one of them further?


It seems to me that the Hauppauge is marginally better than the others.  
What do you think?


Can I improve the test?

Regards

Steve.



--
To unsubscribe from this list: send the line unsubscribe linux-media in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html


[PATCH v4 05/14] v4l: ti-vpe: Allow usage of smaller images

2014-03-13 Thread Archit Taneja
The minimum width and height for VPE input/output was kept as 128 pixels. VPE
doesn't have a constraint on the image height, it requires the image width to
be at least 16 bytes.

Change the minimum supported dimensions to 32x32. This allows us to de-interlace
qcif content. A smaller image size than 32x32 didn't make much sense, so stopped
at this.
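The qcif motivation can be checked numerically. A rough sketch (one plausible reading, assumed here, is that de-interlacing consumes half-height fields, which is why the old 128-pixel minimum rejected qcif even though a full 176x144 frame passes it):

```python
# Old vs. new VPE frame-size minimums, from the patch
MIN_W_OLD = MIN_H_OLD = 128
MIN_W_NEW = MIN_H_NEW = 32

def accepted(w, h, min_w, min_h):
    """Does a w x h image satisfy the driver's minimum dimensions?"""
    return w >= min_w and h >= min_h

# A single field of interlaced QCIF (176x144) is 176x72
field_w, field_h = 176, 144 // 2

print(accepted(field_w, field_h, MIN_W_OLD, MIN_H_OLD))  # False
print(accepted(field_w, field_h, MIN_W_NEW, MIN_H_NEW))  # True
```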

Reviewed-by: Hans Verkuil hans.verk...@cisco.com
Signed-off-by: Archit Taneja arc...@ti.com
---
 drivers/media/platform/ti-vpe/vpe.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/drivers/media/platform/ti-vpe/vpe.c 
b/drivers/media/platform/ti-vpe/vpe.c
index 0e7573a..dbdc338 100644
--- a/drivers/media/platform/ti-vpe/vpe.c
+++ b/drivers/media/platform/ti-vpe/vpe.c
@@ -49,8 +49,8 @@
 #define VPE_MODULE_NAME vpe
 
 /* minimum and maximum frame sizes */
-#define MIN_W  128
-#define MIN_H  128
+#define MIN_W  32
+#define MIN_H  32
 #define MAX_W  1920
 #define MAX_H  1080
 
-- 
1.8.3.2

--
To unsubscribe from this list: send the line unsubscribe linux-media in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html


[PATCH v3 05/14] v4l: ti-vpe: Allow usage of smaller images

2014-03-11 Thread Archit Taneja
The minimum width and height for VPE input/output was kept as 128 pixels. VPE
doesn't have a constraint on the image height, it requires the image width to
be at least 16 bytes.

Change the minimum supported dimensions to 32x32. This allows us to de-interlace
qcif content. A smaller image size than 32x32 didn't make much sense, so stopped
at this.

Signed-off-by: Archit Taneja arc...@ti.com
---
 drivers/media/platform/ti-vpe/vpe.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/drivers/media/platform/ti-vpe/vpe.c 
b/drivers/media/platform/ti-vpe/vpe.c
index 0e7573a..dbdc338 100644
--- a/drivers/media/platform/ti-vpe/vpe.c
+++ b/drivers/media/platform/ti-vpe/vpe.c
@@ -49,8 +49,8 @@
 #define VPE_MODULE_NAME vpe
 
 /* minimum and maximum frame sizes */
-#define MIN_W  128
-#define MIN_H  128
+#define MIN_W  32
+#define MIN_H  32
 #define MAX_W  1920
 #define MAX_H  1080
 
-- 
1.8.3.2

--
To unsubscribe from this list: send the line unsubscribe linux-media in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html


Re: [PATCH v3 05/14] v4l: ti-vpe: Allow usage of smaller images

2014-03-11 Thread Hans Verkuil
On 03/11/14 09:33, Archit Taneja wrote:
 The minimum width and height for VPE input/output was kept as 128 pixels. VPE
 doesn't have a constraint on the image height, it requires the image width to
 be at least 16 bytes.
 
 Change the minimum supported dimensions to 32x32. This allows us to
 de-interlace qcif content. A smaller image size than 32x32 didn't make
 much sense, so stopped at this.
 
 Signed-off-by: Archit Taneja arc...@ti.com

Reviewed-by: Hans Verkuil hans.verk...@cisco.com

 ---
  drivers/media/platform/ti-vpe/vpe.c | 4 ++--
  1 file changed, 2 insertions(+), 2 deletions(-)
 
 diff --git a/drivers/media/platform/ti-vpe/vpe.c 
 b/drivers/media/platform/ti-vpe/vpe.c
 index 0e7573a..dbdc338 100644
 --- a/drivers/media/platform/ti-vpe/vpe.c
 +++ b/drivers/media/platform/ti-vpe/vpe.c
 @@ -49,8 +49,8 @@
  #define VPE_MODULE_NAME vpe
  
  /* minimum and maximum frame sizes */
 -#define MIN_W128
 -#define MIN_H128
 +#define MIN_W32
 +#define MIN_H32
  #define MAX_W1920
  #define MAX_H1080
  
 

--
To unsubscribe from this list: send the line unsubscribe linux-media in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html


[PATCH v2 5/7] v4l: ti-vpe: Allow usage of smaller images

2014-03-04 Thread Archit Taneja
The minimum width and height for VPE input/output was kept as 128 pixels. VPE
doesn't have a constraint on the image height, it requires the image width to
be at least 16 bytes.

Change the minimum supported dimensions to 32x32. This allows us to de-interlace
qcif content. A smaller image size than 32x32 didn't make much sense, so stopped
at this.

Signed-off-by: Archit Taneja arc...@ti.com
---
 drivers/media/platform/ti-vpe/vpe.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/drivers/media/platform/ti-vpe/vpe.c 
b/drivers/media/platform/ti-vpe/vpe.c
index 915029b..3a610a6 100644
--- a/drivers/media/platform/ti-vpe/vpe.c
+++ b/drivers/media/platform/ti-vpe/vpe.c
@@ -49,8 +49,8 @@
 #define VPE_MODULE_NAME vpe
 
 /* minimum and maximum frame sizes */
-#define MIN_W  128
-#define MIN_H  128
+#define MIN_W  32
+#define MIN_H  32
 #define MAX_W  1920
 #define MAX_H  1080
 
-- 
1.8.3.2

--
To unsubscribe from this list: send the line unsubscribe linux-media in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html


RE: [PATCH 5/7] v4l: ti-vpe: Allow usage of smaller images

2014-03-03 Thread Kamil Debski
Hi Archit,

 From: Archit Taneja [mailto:arc...@ti.com]
 Sent: Monday, March 03, 2014 8:33 AM
 
 The minimum width and height for VPE input/output was kept as 128
 pixels. VPE doesn't have a constraint on the image height, it requires
 the image width to be atleast 16 bytes.

16 bytes - shouldn't it be pixels? (also at least :) ) 
I can correct it when applying the patch if the above is the case.
 
 Change the minimum supported dimensions to 32x32. This allows us to
 de-interlace qcif content. A smaller image size than 32x32 didn't make
 much sense, so stopped at this.
 
 Signed-off-by: Archit Taneja arc...@ti.com

Best wishes,
-- 
Kamil Debski
Samsung RD Institute Poland


 ---
  drivers/media/platform/ti-vpe/vpe.c | 4 ++--
  1 file changed, 2 insertions(+), 2 deletions(-)
 
 diff --git a/drivers/media/platform/ti-vpe/vpe.c
 b/drivers/media/platform/ti-vpe/vpe.c
 index 915029b..3a610a6 100644
 --- a/drivers/media/platform/ti-vpe/vpe.c
 +++ b/drivers/media/platform/ti-vpe/vpe.c
 @@ -49,8 +49,8 @@
  #define VPE_MODULE_NAME vpe
 
  /* minimum and maximum frame sizes */
 -#define MIN_W128
 -#define MIN_H128
 +#define MIN_W32
 +#define MIN_H32
  #define MAX_W1920
  #define MAX_H1080
 
 --
 1.8.3.2

--
To unsubscribe from this list: send the line unsubscribe linux-media in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html


Re: [PATCH 5/7] v4l: ti-vpe: Allow usage of smaller images

2014-03-03 Thread Archit Taneja

Hi,

On Monday 03 March 2014 05:44 PM, Kamil Debski wrote:

Hi Archit,


From: Archit Taneja [mailto:arc...@ti.com]
Sent: Monday, March 03, 2014 8:33 AM

The minimum width and height for VPE input/output was kept as 128
pixels. VPE doesn't have a constraint on the image height, it requires
the image width to be atleast 16 bytes.


16 bytes - shouldn't it be pixels? (also at least :) )
I can correct it when applying the patch if the above is the case.


VPDMA IP requires the line stride of the input/output images to be at 
least 16 bytes in total.


This can safely result in a minimum width of 16 pixels (NV12 has the 
least bpp, with 1 byte per pixel). But we don't really have any need to 
support such small image resolutions, so I picked 32x32 as the 
minimum size and tested VPE with that.
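Archit's arithmetic can be restated briefly; a sketch under the assumptions stated in the thread (a 16-byte minimum VPDMA line stride, NV12 luma at 1 byte per pixel):

```python
MIN_STRIDE_BYTES = 16   # VPDMA line-stride requirement, per the thread
NV12_LUMA_BPP = 1       # bytes per pixel on the NV12 luma plane (lowest bpp case)

# Smallest width whose line stride still meets the requirement
min_width_px = MIN_STRIDE_BYTES // NV12_LUMA_BPP
print(min_width_px)     # 16

# The driver rounds up to a more practical minimum
MIN_W = MIN_H = 32
assert MIN_W >= min_width_px
```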


Thanks,
Archit

--
To unsubscribe from this list: send the line unsubscribe linux-media in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html


[PATCH 5/7] v4l: ti-vpe: Allow usage of smaller images

2014-03-02 Thread Archit Taneja
The minimum width and height for VPE input/output was kept as 128 pixels. VPE
doesn't have a constraint on the image height, it requires the image width to
be atleast 16 bytes.

Change the minimum supported dimensions to 32x32. This allows us to de-interlace
qcif content. A smaller image size than 32x32 didn't make much sense, so stopped
at this.

Signed-off-by: Archit Taneja arc...@ti.com
---
 drivers/media/platform/ti-vpe/vpe.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/drivers/media/platform/ti-vpe/vpe.c 
b/drivers/media/platform/ti-vpe/vpe.c
index 915029b..3a610a6 100644
--- a/drivers/media/platform/ti-vpe/vpe.c
+++ b/drivers/media/platform/ti-vpe/vpe.c
@@ -49,8 +49,8 @@
 #define VPE_MODULE_NAME vpe
 
 /* minimum and maximum frame sizes */
-#define MIN_W  128
-#define MIN_H  128
+#define MIN_W  32
+#define MIN_H  32
 #define MAX_W  1920
 #define MAX_H  1080
 
-- 
1.8.3.2

--
To unsubscribe from this list: send the line unsubscribe linux-media in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html


FW: problems with larger raw video images

2013-12-12 Thread Brown Patrick-PMH786
 Hans,
 My name is pat brown. I work for motorola solutions.
 I am using an 8 megapixel camera and the libv4l2 library is giving me an error:

 libv4l2: error converting / decoding frame data: v4l-convert: error 
destination buffer too small (16777216 < 23970816)
 VIDIOC_DQBUF: Bad address

 v4l-utils-HEAD-4dea4af/lib/libv4l2/libv4l2-priv.h

 I have tracked it down to the following:
 /lib/libv4l2/libv4l2-priv.h:#define V4L2_FRAME_BUF_SIZE (4096 * 4096)

 When I changed the value to :
 /lib/libv4l2/libv4l2-priv.h:#define V4L2_FRAME_BUF_SIZE (2 * 4096 * 4096)

 How can this change be properly integrated? This problem impacts the OpenCV 
software package.
 Pat,
 631 880 1188

From: Hans Verkuil hansv...@cisco.com
Sent: Thursday, December 12, 2013 1:42 AM
To: Brown Patrick-PMH786
Cc: Seiter Paul-VNJW48; Clayton Mark-AMC036; Super Boaz-SUPER1
Subject: Re: problems with larger raw video images

Hi Pat!

The best approach is to post it to the linux-media mailinglist
(linux-media@vger.kernel.org, no need to subscribe but make sure your
email is normal ascii and not HTML since those are rejected AFAIK)
and Cc Hans de Goede (hdego...@redhat.com) since he maintains
that code.

Regards,

Hans

On 12/11/13 21:29, Brown Patrick-PMH786 wrote:
 Hans,
 My name is pat brown. I work for motorola solutions.
 I am using an 8 megapixel camera and the libv4l2 library is giving me an error:

 libv4l2: error converting / decoding frame data: v4l-convert: error 
  destination buffer too small (16777216 < 23970816)
 VIDIOC_DQBUF: Bad address

 v4l-utils-HEAD-4dea4af/lib/libv4l2/libv4l2-priv.h

 I have tracked it down to the following:
 /lib/libv4l2/libv4l2-priv.h:#define V4L2_FRAME_BUF_SIZE (4096 * 4096)

 When I changed the value to :
 /lib/libv4l2/libv4l2-priv.h:#define V4L2_FRAME_BUF_SIZE (2 * 4096 * 4096)

  How can this change be properly integrated? This problem impacts the OpenCV 
  software package.
 Pat,
 631 880 1188
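The numbers in Pat's error message line up with an 8 MP frame decoded to 24-bit RGB; a quick check (the 3264x2448 geometry is an assumption, a typical 8 MP sensor resolution not stated in the report):

```python
V4L2_FRAME_BUF_SIZE = 4096 * 4096           # libv4l2's original cap: 16777216 bytes

# Hypothetical 8 MP frame decoded to 24-bit RGB (3 bytes/pixel)
width, height, bytes_per_pixel = 3264, 2448, 3
needed = width * height * bytes_per_pixel

print(needed)                               # 23970816, as in the error message
print(needed <= V4L2_FRAME_BUF_SIZE)        # False -- hence "buffer too small"
print(needed <= 2 * V4L2_FRAME_BUF_SIZE)    # True  -- Pat's doubled value suffices
```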





--
To unsubscribe from this list: send the line unsubscribe linux-media in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html


Re: width and height of JPEG compressed images

2013-09-01 Thread Laurent Pinchart
Hi Sylwester,

On Sunday 01 September 2013 00:25:56 Sylwester Nawrocki wrote:
 Hi Thomas,
 
 I sincerely apologise for not having replied earlier. Looks like I'm
 being pulled in too many directions. :/
 
 On 08/06/2013 06:26 PM, Thomas Vajzovic wrote:
  On 24 July 2013 10:30 Sylwester Nawrocki wrote:
  On 07/22/2013 10:40 AM, Thomas Vajzovic wrote:
  On 21 July 2013 21:38 Sylwester Nawrocki wrote:
  On 07/19/2013 10:28 PM, Sakari Ailus wrote:
  On Sat, Jul 06, 2013 at 09:58:23PM +0200, Sylwester Nawrocki wrote:
  On 07/05/2013 10:22 AM, Thomas Vajzovic wrote:
  The hardware reads AxB sensor pixels from its array, resamples
  them to CxD image pixels, and then compresses them to ExF bytes.
  
  If the sensor driver is only told the user's requested sizeimage, it
  can be made to factorize (ExF) into (E,F) itself, but then both the
  parallel interface and the 2D DMA peripheral need to be told the
  particular factorization that it has chosen.
  
  If the user requests sizeimage which cannot be satisfied (eg: a prime
  number) then it will need to return (E,F) to the bridge driver which
  does not multiply exactly to sizeimage.  Because of this the bridge
  driver must set the corrected value of sizeimage which it returns to
  userspace to the product ExF.
  
  Ok, let's consider following data structure describing the frame:
  
  struct v4l2_frame_desc_entry {
 u32 flags;
 u32 pixelcode;
 u32 samples_per_line;
 u32 num_lines;
 u32 size;
  };
  
  I think we could treat the frame descriptor to be at a lower level in
  the protocol stack than struct v4l2_mbus_framefmt.
  
  Then the bridge would set size and pixelcode and the subdev would
  return (E, F) in (samples_per_line, num_lines) and adjust size if
  required. Number of bits per sample can be determined by pixelcode.
  
  It needs to be considered that for some sensor drivers it might not
  be immediately clear what samples_per_line, num_lines values are.
  In such case those fields could be left zeroed and bridge driver
  could signal such condition as a more or less critical error. In
  end of the day specific sensor driver would need to be updated to
  interwork with a bridge that requires samples_per_line, num_lines.
  
  I think we ought to try to consider the four cases:
  
  1D sensor and 1D bridge: already works
  
  2D sensor and 2D bridge: my use case
  
  1D sensor and 2D bridge, 2D sensor and 1D bridge:
  Perhaps both of these cases could be made to work by setting:
  num_lines = 1; samples_per_line = ((size * 8) / bpp);
  
  (Obviously this would also require the appropriate pull-up/down
  on the second sync input on a 2D bridge).
 
 So to determine when this has to be done, e.g. we could see that either
 num_lines or samples_per_line == 1 ?
 
  Since the frame descriptor interface is still new and used in so
  few drivers, is it reasonable to expect them all to be fixed to
  do this?
 
 Certainly, I'll be happy to rework those drivers to whatever the
 re-designed API, as long as it supports the current functionality.
 
  Not sure if we need to add image width and height in pixels to the
  above structure. It wouldn't make much sense when a single frame
  carries multiple images, e.g. interleaved YUV and compressed image
  data at different resolutions.
  
  If image size were here then we are duplicating get_fmt/set_fmt.
  But then, by having pixelcode here we are already duplicating part
  of get_fmt/set_fmt.  If the bridge changes pixelcode and calls
  set_frame_desc then is this equivalent to calling set_fmt?
  I would like to see as much data normalization as possible and
  eliminate the redundancy.
 
 Perhaps we could replace pixelcode in the above struct by something
 else that would have conveyed required data. But I'm not sure what it
 would have been. Perhaps just bits_per_sample for starters ?
 
 The frame descriptors were also supposed to cover interleaved image data.
 Then we need something like pixelcode (MIPI CSI-2 Data Type) in each
 frame_desc entry.
 
 Not sure if it would have been sensible to put some of the above
 information into struct v4l2_mbus_frame_desc rather than to struct
 v4l2_mbus_frame_desc_entry.
 
  Whatever mechanism is chosen needs to have corresponding get/set/try
  methods to be used when the user calls
  VIDIOC_G_FMT/VIDIOC_S_FMT/VIDIOC_TRY_FMT.
  
  Agreed, it seems we need some sort of negotiation of those low
  level parameters.
  
  Should there be set/get/try function pointers, or should the struct
  include an enum member like v4l2_subdev_format.which to determine
  which operation is to be performed?
  
  Personally I think that it is a bit ugly having two different
  function pointers for set_fmt/get_fmt but then a structure member
  to determine between set/try.  IMHO it should be three function
  pointers or one function with a three valued enum in the struct.
 
 I'm fine either way, with a slight preference for three separate callbacks.

Don't forget that the reason

Re: width and height of JPEG compressed images

2013-08-31 Thread Sylwester Nawrocki

Hi Thomas,

I sincerely apologise for not having replied earlier. Looks like I'm
being pulled in too many directions. :/

On 08/06/2013 06:26 PM, Thomas Vajzovic wrote:

On 24 July 2013 10:30 Sylwester Nawrocki wrote:

On 07/22/2013 10:40 AM, Thomas Vajzovic wrote:

On 21 July 2013 21:38 Sylwester Nawrocki wrote:

On 07/19/2013 10:28 PM, Sakari Ailus wrote:

On Sat, Jul 06, 2013 at 09:58:23PM +0200, Sylwester Nawrocki wrote:

On 07/05/2013 10:22 AM, Thomas Vajzovic wrote:


The hardware reads AxB sensor pixels from its array, resamples
them to CxD image pixels, and then compresses them to ExF bytes.


If the sensor driver is only told the user's requested sizeimage, it
can be made to factorize (ExF) into (E,F) itself, but then both the
parallel interface and the 2D DMA peripheral need to be told the
particular factorization that it has chosen.

If the user requests sizeimage which cannot be satisfied (eg: a prime
number) then it will need to return (E,F) to the bridge driver which
does not multiply exactly to sizeimage.  Because of this the bridge
driver must set the corrected value of sizeimage which it returns to
userspace to the product ExF.


Ok, let's consider following data structure describing the frame:

struct v4l2_frame_desc_entry {
   u32 flags;
   u32 pixelcode;
   u32 samples_per_line;
   u32 num_lines;
   u32 size;
};

I think we could treat the frame descriptor to be at a lower level in
the protocol stack than struct v4l2_mbus_framefmt.

Then the bridge would set size and pixelcode and the subdev would
return (E, F) in (samples_per_line, num_lines) and adjust size if
required. Number of bits per sample can be determined by pixelcode.

It needs to be considered that for some sensor drivers it might not
be immediately clear what the samples_per_line, num_lines values are.
In such a case those fields could be left zeroed and the bridge driver
could signal such a condition as a more or less critical error. At the
end of the day a specific sensor driver would need to be updated to
interwork with a bridge that requires samples_per_line, num_lines.
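A minimal sketch of that negotiation, assuming the struct above; the helper name, the hardware line-width limit, and the halving strategy are illustrative only, not anything the thread specifies:

```c
/* Hypothetical sketch: the bridge fills in size (and pixelcode), the
 * subdev factorises size into (samples_per_line, num_lines) and corrects
 * size to the exact product ExF, as discussed above.
 */
#include <stdint.h>

typedef uint32_t u32;

struct v4l2_frame_desc_entry {
	u32 flags;
	u32 pixelcode;
	u32 samples_per_line;
	u32 num_lines;
	u32 size;
};

/* Subdev side: pick a factorisation no wider than the hardware limit. */
static void subdev_try_frame_desc(struct v4l2_frame_desc_entry *fd,
				  u32 max_samples_per_line)
{
	u32 e = fd->size;
	u32 f = 1;

	/* Halve the line length (rounding up) until it fits. */
	while (e > max_samples_per_line) {
		f *= 2;
		e = (e + 1) / 2;
	}

	fd->samples_per_line = e;
	fd->num_lines = f;
	fd->size = e * f;	/* corrected size returned to the bridge */
}
```

Note how a size that cannot be factorised exactly (e.g. a prime) comes back adjusted upward, which is why the bridge must report the product ExF to userspace rather than the requested sizeimage.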


I think we ought to try to consider the four cases:

1D sensor and 1D bridge: already works

2D sensor and 2D bridge: my use case

1D sensor and 2D bridge, 2D sensor and 1D bridge:
Perhaps both of these cases could be made to work by setting:
num_lines = 1; samples_per_line = ((size * 8) / bpp);

(Obviously this would also require the appropriate pull-up/down
on the second sync input on a 2D bridge).
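The fallback for the mixed 1D/2D cases can be sketched as follows; the function name is made up for illustration, and bpp (bits per sample) would in practice be derived from pixelcode:

```c
/* Sketch of the suggested 1D fallback: treat the whole payload as a
 * single line, so a 1D peer only needs the total sample count.
 */
#include <stdint.h>

static void frame_desc_as_1d(uint32_t size, uint32_t bpp,
			     uint32_t *samples_per_line, uint32_t *num_lines)
{
	*num_lines = 1;
	*samples_per_line = (size * 8) / bpp;
}
```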


So to determine when this has to be done, we could e.g. check whether
num_lines or samples_per_line == 1 ?


Since the frame descriptor interface is still new and used in so
few drivers, is it reasonable to expect them all to be fixed to
do this?


Certainly, I'll be happy to rework those drivers to whatever the
re-designed API looks like, as long as it supports the current
functionality.


Not sure if we need to add image width and height in pixels to the
above structure. It wouldn't make much sense when a single frame
carries multiple images, e.g. interleaved YUV and compressed image
data at different resolutions.


If image size were here then we are duplicating get_fmt/set_fmt.
But then, by having pixelcode here we are already duplicating part
of get_fmt/set_fmt.  If the bridge changes pixelcode and calls
set_frame_desc then is this equivalent to calling set_fmt?
I would like to see as much data normalization as possible and
eliminate the redundancy.


Perhaps we could replace pixelcode in the above struct by something
else that would have conveyed required data. But I'm not sure what it
would have been. Perhaps just bits_per_sample for starters ?

The frame descriptors were also supposed to cover interleaved image data.
Then we need something like pixelcode (MIPI CSI-2 Data Type) in each
frame_desc entry.

Not sure if it would be more sensible to put some of the above
information into struct v4l2_mbus_frame_desc rather than into struct
v4l2_mbus_frame_desc_entry.


Whatever mechanism is chosen needs to have corresponding get/set/try
methods to be used when the user calls
VIDIOC_G_FMT/VIDIOC_S_FMT/VIDIOC_TRY_FMT.


Agreed, it seems we need some sort of negotiation of those low
level parameters.


Should there be set/get/try function pointers, or should the struct
include an enum member like v4l2_subdev_format.which to determine
which operation is to be performed?

Personally I think that it is a bit ugly having two different
function pointers for set_fmt/get_fmt but then a structure member
to determine between set/try.  IMHO it should be three function
pointers or one function with a three valued enum in the struct.


I'm fine either way, with a slight preference for three separate callbacks.

WRT making frame_desc read-only, subdevices for which it doesn't make
sense to set anything could always adjust the passed data to their
fixed values, or to values depending on other calls, like the pad-level
set_fmt op. And we seem to have use cases already for at least
try_frame_desc.

--
Regards

RE: width and height of JPEG compressed images

2013-08-22 Thread Thomas Vajzovic
Hi,

On 21 August 2013 14:29, Laurent Pinchart wrote:
 On Wednesday 21 August 2013 16:17:37 Sakari Ailus wrote:
 On Wed, Aug 07, 2013 at 05:43:56PM +, Thomas Vajzovic wrote:
 It defines the exact size of the physical frame.  The JPEG data is
 padded to this size. The size of the JPEG before it was padded is
 also written into the last word of the physical frame.

That would require either using a custom pixel format and having
userspace read the size from the buffer, or mapping the buffer in
kernel space and reading the size there. The latter is easier for
userspace, but might it hinder performance?

I think it ought to be a custom format and handled in userspace,
otherwise the bridge driver would have to call a subdev function
each frame to get it to fix up the used size each time, which is
quite ugly.
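The userspace side of that custom format could look roughly like this; the word width and byte order (little-endian 32-bit) are assumptions, since the thread only says the true length is written into the last word of the physical frame:

```c
/* Userspace sketch: the dequeued buffer is padded JPEG data whose
 * actual payload length sits in the last word of the frame.
 */
#include <stdint.h>
#include <string.h>

static uint32_t jpeg_payload_size(const uint8_t *buf, size_t frame_size)
{
	uint32_t used;

	/* memcpy avoids unaligned access; assumes the sensor's byte order. */
	memcpy(&used, buf + frame_size - sizeof(used), sizeof(used));
	return used;
}
```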

Regards,
Tom

--
Mr T. Vajzovic
Software Engineer
Infrared Integrated Systems Ltd
Visit us at www.irisys.co.uk
Disclaimer: This e-mail message is confidential and for use by the addressee 
only. If the message is received by anyone other than the addressee, please 
return the message to the sender by replying to it and then delete the original 
message and the sent message from your computer. Infrared Integrated Systems 
Limited Park Circle Tithe Barn Way Swan Valley Northampton NN4 9BG Registration 
Number: 3186364.
--
To unsubscribe from this list: send the line unsubscribe linux-media in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html


RE: width and height of JPEG compressed images

2013-08-22 Thread Thomas Vajzovic
Hi,

On 21 August 2013 14:34, Sakari Ailus wrote:
 On Tue, Aug 06, 2013 at 04:26:56PM +, Thomas Vajzovic wrote:
 On 24 July 2013 10:30 Sylwester Nawrocki wrote:
 On 07/22/2013 10:40 AM, Thomas Vajzovic wrote:
 On 21 July 2013 21:38 Sylwester Nawrocki wrote:
 On 07/19/2013 10:28 PM, Sakari Ailus wrote:
 On Sat, Jul 06, 2013 at 09:58:23PM +0200, Sylwester Nawrocki wrote:
 On 07/05/2013 10:22 AM, Thomas Vajzovic wrote:

 The hardware reads AxB sensor pixels from its array, resamples
 them to CxD image pixels, and then compresses them to ExF bytes.

 If the sensor driver is only told the user's requested sizeimage,
 it can be made to factorize (ExF) into (E,F) itself, but then both
 the parallel interface and the 2D DMA peripheral need to be told
 the particular factorization that it has chosen.

 If the user requests sizeimage which cannot be satisfied (eg: a
 prime
 number) then it will need to return (E,F) to the bridge driver
 which does not multiply exactly to sizeimage.  Because of this the
 bridge driver must set the corrected value of sizeimage which it
 returns to userspace to the product ExF.

 Ok, let's consider following data structure describing the frame:

 struct v4l2_frame_desc_entry {
   u32 flags;
   u32 pixelcode;
   u32 samples_per_line;
   u32 num_lines;
   u32 size;
 };

 I think we could treat the frame descriptor as being at a lower level in
 the protocol stack than struct v4l2_mbus_framefmt.

 Then the bridge would set size and pixelcode and the subdev would
 return (E, F) in (samples_per_line, num_lines) and adjust size if
 required. The number of bits per sample can be determined from pixelcode.

 It needs to be considered that for some sensor drivers it might not
 be immediately clear what the samples_per_line, num_lines values are.
 In such a case those fields could be left zeroed and the bridge driver
 could signal such a condition as a more or less critical error. At the
 end of the day a specific sensor driver would need to be updated to
 interwork with a bridge that requires samples_per_line, num_lines.

 I think we ought to try to consider the four cases:

 1D sensor and 1D bridge: already works

 2D sensor and 2D bridge: my use case

 1D sensor and 2D bridge, 2D sensor and 1D bridge:

 Are there any bridge devices that CANNOT receive 2D images? I've
 never seen any.

I meant bridge with 1D DMA.

 Perhaps both of these cases could be made to work by setting:
 num_lines = 1; samples_per_line = ((size * 8) / bpp);

 (Obviously this would also require the appropriate pull-up/down on the
 second sync input on a 2D bridge).

 And typically also 2D-only bridges have very limited maximum image
 width which is unsuitable for any decent images. I'd rather like to
 only support cases that we actually have right now.

That makes sense.  I would make a small change though:

I think your proposed structure and protocol has redundant data
which could lead to ambiguity.

Perhaps the structure should only have size and samples_per_line.
If the subdev supports 2D output of a compressed stream then it examines
size, and sets samples_per_line and adjusts size.  If not then it
may still adjust size but leaves samples_per_line zeroed.  As you said
if the bridge finds samples_per_line still zeroed and it needs it then
it will have to give up.  If it has a non-zero samples_per_line then it
can divide to find num_lines.
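The bridge side of this slimmed-down proposal might be sketched like so; the helper name and error convention are illustrative only:

```c
/* Sketch: the entry carries only size and samples_per_line.  The bridge
 * derives num_lines by division, and gives up when the subdev left
 * samples_per_line zeroed (no 2D support) or the division isn't exact.
 */
#include <stdint.h>

static int bridge_derive_num_lines(uint32_t size, uint32_t samples_per_line,
				   uint32_t *num_lines)
{
	if (samples_per_line == 0 || size % samples_per_line != 0)
		return -1;	/* subdev can't do 2D, or sizes disagree */

	*num_lines = size / samples_per_line;
	return 0;
}
```

This removes the redundancy between num_lines, samples_per_line and size that the structure in the earlier proposal carried.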

 Not sure if we need to add image width and height in pixels to the
 above structure. It wouldn't make much sense when a single frame
 carries multiple images, e.g. interleaved YUV and compressed image
 data at different resolutions.

 If image size were here then we are duplicating get_fmt/set_fmt.
 But then, by having pixelcode here we are already duplicating part of
 get_fmt/set_fmt.  If the bridge changes pixelcode and calls

 Pixelcode would be required to tell which other kind of data is
 produced by the device. But I agree in principle --- there could
 (theoretically) be multiple pixelcodes that you might want to
 configure on a sensor. We don't have a way to express that currently.

I wasn't thinking that set_frame_desc should be able to configure
currently unselected pixelcodes; quite the contrary, I would expect
the pad to have a selected pixelcode, set by set_mbus_fmt. So having
pixelcode in frame_desc_entry is extra duplication, and I don't know
why it is there.

 Do you have an example of something you'd like to set (or try) in frame
 descriptors outside struct v4l2_subdev_format?

I only have a need to try/set the buffersize which is tried/set by
userspace.


Best regards,
Tom

--
Mr T. Vajzovic
Software Engineer
Infrared Integrated Systems Ltd
Visit us at www.irisys.co.uk
