Thank you all for your comments.
Marcus, I'd just like to say that I have been using the packet encoder/decoder
for a long time (3-4 years) and could not see any issues that could be
attributed to it.
Regarding the dropped samples, is it the case that the buffers are continuously
grown as the
I second what Michael wrote, but I'd like to be more general:
GNU Radio does NOT drop samples, anywhere. SDR, audio, or similar
analog/digital hardware might do that when its buffers overrun.
There's a long-standing, and seemingly unfixable bug in the
packet_encoder/decoder Python hier blocks that
Hi Adrian - If you use a file source with a throttle, then that section of
your flowgraph will not drop samples. Using what Kevin wrote: The file
source will "work" as fast as possible, so its output buffer will fill up
quickly and any time samples are removed from it & so long as there is file
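A minimal sketch of that behaviour in plain Python (this is an illustration of backpressure with a bounded queue, not GNU Radio code): the "file source" producer blocks whenever its output buffer is full, so downstream slowness stalls it rather than losing items.

```python
# Illustrative sketch only: a bounded queue models the output buffer of a
# fast file source. The producer blocks while the buffer is full, so no
# item is ever dropped -- this is the backpressure between blocks.
import queue
import threading

def run_backpressure_demo(n_items=10_000, buffer_size=64):
    buf = queue.Queue(maxsize=buffer_size)   # bounded "output buffer"
    received = []

    def producer():                          # "file source": runs as fast as possible
        for i in range(n_items):
            buf.put(i)                       # blocks while the buffer is full
        buf.put(None)                        # end-of-stream marker

    def consumer():                          # downstream block
        while True:
            item = buf.get()
            if item is None:
                break
            received.append(item)

    t1 = threading.Thread(target=producer)
    t2 = threading.Thread(target=consumer)
    t1.start(); t2.start()
    t1.join(); t2.join()
    return received

items = run_backpressure_demo()
print(len(items))  # every item arrives in order; nothing is dropped
```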
Hi Kevin,
Thanks for the explanation. I think my flawed understanding came from
having a file source with a throttle block, and seeing samples being
dropped from buffers; the ASCII bytes were lost not at the file source
but somewhere along the way. Is it correct to
presume that
On Sun, Sep 1, 2019 at 3:15 AM Adrian Musceac wrote:
> I'd like to know what happens if the block's work function does not finish
> within the time allotted to it.
> Is part of the input/output buffer dropped, or does it interfere with the
> sample rate of the next blocks?
>
GNU Radio's flow
Hi,
I found this thread very useful, and I have another question regarding the
scheduler internals.
It is my understanding from reading documentation that each block's *work*
or *general_work* function
will be called at the highest sample rate that the block will see, whether
that is on the input
If you use set_output_multiple(), you don't have to check the input
buffer. The block will only execute when the number of items available is
a multiple of the value passed to set_output_multiple(). For example, if
set_output_multiple() is set to 256, the block will only execute if
noutput_items is at
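A quick sketch of that rule (assumption: this is a simplified model of the scheduler's behaviour, not the real implementation): with set_output_multiple(m), work() is only invoked with a noutput_items that is a non-zero multiple of m.

```python
# Simplified model (not GNU Radio itself): how much output the scheduler
# would offer a block that called set_output_multiple(output_multiple).
def noutput_for_call(space_available: int, output_multiple: int) -> int:
    """Return the noutput_items the scheduler would pass, or 0 (no call yet)."""
    return (space_available // output_multiple) * output_multiple

print(noutput_for_call(1000, 256))  # -> 768: three full multiples of 256
print(noutput_for_call(200, 256))   # -> 0: block waits until a full multiple fits
```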
Thank you very much Marcus, Michael, Abin, and Ron, really appreciate your
responses.
To give some context, I just started designing a prototype reader to
implement a custom protocol for backscatter neural implants; very excited
to build my platform with GNU Radio :)
After reading all the
Hi Ron,
just because I think this might be interesting to people dealing with
really high rates:
The maximum size is typically limited by the size of mmap'able memory
that you can allocate; that depends on the circular buffer factory
used:
For the posix shared memory thing, I don't think
Hi Laura - All of what's written already is basically correct. I'll add
one minor tweak: "ninput_items" and "noutput_items", which are 2 of the
arguments to the "work" or "general_work" method, define the maximum number
of input and/or output items available to use or requested to produce; see
the
Just to put a number on this question, the DVB-T2 transmitter uses up to
16-megabyte buffers between blocks. I'm not sure what the absolute
maximum is, but 16 megabytes should cover most applications.
The DVB-T2 blocks use set_output_multiple() in conjunction with
forecast() to allocate these
Laura et al,
consume_each <= ninput_items.
If you need a larger buffer than you consume, you can abuse the
scheduler slightly with set_relative_rate(1, some_min_input_items); this
is done in the stream-to-vector blocks, for example. I do this in a
visualization sink where I need to produce FFTs with
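A toy model of the consume semantics being discussed (assumption: a hand-rolled scheduler loop for illustration, not GNU Radio itself): a general block may call consume_each(k) with any k <= ninput_items, and whatever it leaves behind is offered again on the next call.

```python
# Toy model: consume fewer items than are available; leftovers stay in
# the input buffer and reappear on the next work() call.
def run_block(samples, chunk):
    """Consume `chunk` items per call, carrying leftovers across calls."""
    buf = list(samples)          # the input buffer
    calls = []
    while len(buf) >= chunk:     # forecast: need at least `chunk` items
        ninput_items = len(buf)  # everything currently available
        consumed = chunk         # consume_each(consumed), consumed <= ninput_items
        calls.append(buf[:consumed])
        buf = buf[consumed:]     # scheduler advances the read pointer
    return calls, buf            # buf holds the not-yet-consumed tail

calls, leftover = run_block(range(10), chunk=4)
print(calls)     # [[0, 1, 2, 3], [4, 5, 6, 7]]
print(leftover)  # [8, 9] -- kept for the next invocation
```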
Hi Laura,
the buffer sizes are determined at flow graph startup based on the
involved blocks' IO signatures, output multiples, alignment
requirements, minimum and maximum buffer sizes, and the page size
(which is 4 kB practically everywhere).
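A back-of-the-envelope sketch of the page-size constraint (assumption: the buffer's byte length must be both a whole number of 4 kB pages and a whole number of items, as the mmap-based circular buffer requires; the helper name and numbers are illustrative, not GNU Radio's actual allocator):

```python
# Smallest page-aligned buffer that holds at least `requested_items`
# items of `item_size` bytes each.
from math import gcd

PAGE_SIZE = 4096  # 4 kB, as noted above

def min_buffer_items(item_size: int, requested_items: int) -> int:
    """Smallest item count >= requested_items whose byte length is page-aligned."""
    lcm = item_size * PAGE_SIZE // gcd(item_size, PAGE_SIZE)
    granularity = lcm // item_size               # items per aligned chunk
    chunks = -(-requested_items // granularity)  # ceiling division
    return chunks * granularity

# gr_complex is 8 bytes, so 512 items fill one 4 kB page exactly:
print(min_buffer_items(8, 100))   # -> 512 (rounded up to a full page)
print(min_buffer_items(8, 1000))  # -> 1024 (two pages)
```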
I think you're expecting the amount of items your block
Thank you very much Michael.
What are the I/O buffer sizes?
* My block is a general block
* In my forecast: ninput_items_required[0] = noutput_items;
* In the general_work: const gr_complex *in = (const gr_complex *)
  input_items[0];
  float *out =
Hi Laura - In the "work" or "general_work" method, there are arguments that
contain the information you're looking for. These arguments are set
depending on what type of block you're creating (source, sink, sync,
tagged_stream, whatever), are influenced by what the "forecast()" method
returns, and
Hello GNURadio community,
Does anyone know what is the maximum number of input items that an Out Of
Tree block can consume on each input stream?
consume_each(consumed) --> what is the maximum value that the variable
consumed can take?
Thank you very much.
--
Laura Arjona
Washington