On Tue, Feb 17, 2009 at 5:15 PM, James Snyder <jbsnyder at fanplastic.org> wrote:
> I'm not sure, actually. I could try always returning a table whether
> you're getting one value or 100.

Ah, I didn't notice this before. Dado's right, let's keep the result type
consistent. Which means that, for now, let's just make our sample function
accept a table as a parameter and return all its data (be it 1 or 100
samples) in that table.

Best,
Bogdan

> ----- "Dado Sutter" <dadosutter at gmail.com> wrote:
>> Hello,
>>
>> On Tue, Feb 17, 2009 at 10:53, James Snyder <jbsnyder at fanplastic.org> wrote:
>>> I suppose one could also pass an existing table to getsamples, and get
>>> it returned with the results in it?
>>
>> I would also prefer that the functions return the same type in all cases
>> (so a table passed as a parameter would do just fine for both issues).
>> Do you think that the small overhead caused by the table manipulation
>> (instead of a number) justifies returning a different type (a number)
>> for speed-critical sampling apps?
>>
>> Best
>> Dado
>>
>>> On Feb 16, 2009, at 11:15 AM, James Snyder wrote:
>>>
>>>> Hi -
>>>>
>>>> Thanks for the comments :-)
>>>>
>>>> ----- "Bogdan Marinescu" <bogdan.marinescu at gmail.com> wrote:
>>>>
>>>>> Hi,
>>>>>
>>>>> I still have to look at the code carefully and figure out what exactly
>>>>> you did there :), but for now a few simple observations:
>>>>>
>>>>> 1. It occurred to me that since buf_init already expects a logarithmic
>>>>> parameter, it probably makes sense to make it expect two logarithmic
>>>>> parameters, so instead of this:
>>>>>
>>>>>   int buf_set( unsigned resid, unsigned resnum, u8 logsize, size_t dsize )
>>>>>
>>>>> we'll have this:
>>>>>
>>>>>   int buf_set( unsigned resid, unsigned resnum, u8 logsize, size_t logdsize )
>>>>
>>>> I was thinking about doing this, I'll make the change.
>>>>
>>>>> 2. Since you did this:
>>>>>
>>>>>   pbuf->logsize = logsize + ( pbuf->logdsize );
>>>>>
>>>>> in buf_set, you probably need to modify this:
>>>>>
>>>>>   #define BUF_MOD_INCR( p, m ) p->m = ( p->m + ( ( u16 )1 << p->logdsize ) ) & ( ( ( u16 )1 << ( p->logsize + p->logdsize ) ) - 1 )
>>>>>
>>>>> (because you add logdsize to logsize once again, and I don't think
>>>>> this is right).
>>>>
>>>> Ooops. That would likely be the cause of some of the random crashes I
>>>> was seeing :-) (currently worked around somewhat)
>>>>
>>>>> 3. The data size of an ADC is not always 16 bits, so we should add
>>>>> another (probably also logarithmic) parameter to elua_adc/adc_init_state.
>>>>
>>>> True. Are you anticipating use of higher or lower bit depth? There are
>>>> certainly lower and higher ones out there, though I've not seen >16-bit
>>>> ones built into uCs. If we want to accommodate larger sizes, the return
>>>> types for some things will need adjustment, perhaps by defining a type
>>>> that reflects the maximum size that will be returned? This type would
>>>> be selected at compile time depending on the maximum bits-per-sample
>>>> one might want to work with?
>>>> Something like:
>>>>
>>>>   #if MAX_ADC_BIT_RESOLUTION <= 8
>>>>   typedef u8 t_adc_data;
>>>>   #elif MAX_ADC_BIT_RESOLUTION <= 16
>>>>   typedef u16 t_adc_data;
>>>>   #elif MAX_ADC_BIT_RESOLUTION <= 32
>>>>   typedef u32 t_adc_data;
>>>>   #else
>>>>   #error "No matching type for MAX_ADC_BIT_RESOLUTION, check your selected bit depth or add larger type"
>>>>   #endif
>>>>
>>>>> 4. As for the change you proposed, as I said I still have to figure
>>>>> out what exactly your code does :), but for now it makes sense. I'll
>>>>> get back to you with more information. Fortunately we don't really
>>>>> have a pre-existing paradigm, we just have some proposals, so we can
>>>>> change everything we don't like.
>>>>
>>>> OK, sounds good. The one mildly complicated thing to make this approach
>>>> work with dynamic buffer sizing is to have buf_set handle increasing
>>>> the buffer size gracefully. I think the main case to handle is when
>>>> wptr < rptr, i.e. the write pointer has wrapped around to the beginning
>>>> of the buffer, but the read pointer has not. If one just adds space in
>>>> this case, the read pointer will start going into as-yet-unwritten
>>>> space thinking it is picking up valid data.
>>>>
>>>> One way to handle this would be copying data so that the freshly
>>>> resized buffer is coherent again. Another would be to somehow grow the
>>>> buffer in the space between the rptr and the wptr. This, however,
>>>> without moving data around, even if it were possible, would result in
>>>> fragmentation.
>>>>
>>>> If I were to just do an implementation without further research it
>>>> might look like this:
>>>>
>>>> 1. If wptr > rptr, just realloc.
>>>> 2. If rptr > wptr, move all of the elements between buf (the array
>>>>    start pointer) and wptr to the space after the wrapping point of
>>>>    the original, smaller buffer.
>>>>
>>>> If we could also grow the buffer at the starting end, maybe we could
>>>> decide whether adding at the start or the end would result in more
>>>> copying.
>>>>
>>>> I'm not as concerned about algorithms for downsizing the buffer to
>>>> conserve space. I think instead of dealing with copying in this case,
>>>> the downsizing might just be done whenever the buffer runs dry, and if
>>>> no new interesting requests are pending, drop down to some reasonable
>>>> default size.
>>>>
>>>> Any thoughts or ideas would certainly be appreciated.
>>>>
>>>>> Best,
>>>>> Bogdan
>>>>>
>>>>> On Mon, Feb 16, 2009 at 3:12 AM, James Snyder <jbsnyder at fanplastic.org> wrote:
>>>>>> Hi -
>>>>>>
>>>>>> I've dropped in another large ADC commit. I've mentioned most of what
>>>>>> was done in the commit message, but here's a rundown:
>>>>>>
>>>>>> - When samples are available from the ADC, they're initially copied
>>>>>>   into an eLua buf.
>>>>>> - buf length is adjusted according to the number of expected samples
>>>>>>   coming in (when a burst is requested, the buf is resized to
>>>>>>   accommodate the number of burst samples; the size is dropped back
>>>>>>   down when single samples are requested).
>>>>>> - If smoothing is enabled and has no samples, the smoothing buffer
>>>>>>   (not an eLua buf) is filled first to warm up the filter, then
>>>>>>   samples begin to accumulate in the main buffer.
>>>>>> - A flush function has been added to manually clear out both the
>>>>>>   smoothing and primary buffers, in case one doesn't want old samples
>>>>>>   or old smoothing data being used for future measurements.
>>>>>>
>>>>>> Also, I forgot to mention one thing in the commit message: as per a
>>>>>> discussion with Bogdan, the type checking on buf_write and buf_read
>>>>>> has been pulled out.
>>>>>>
>>>>>> One adjustment that I'd like to consider before the 0.6 freeze is to
>>>>>> remove the blocking/non-blocking option as it applies to the sample
>>>>>> and burst functions (used to initiate sampling), to instead make
>>>>>> these always non-blocking, and to never have them return any samples
>>>>>> (only errors, if needed). A separate function, say getsamples, would
>>>>>> pull in data collected using either mode. Right now, if one uses
>>>>>> non-blocking mode, samples will always be returned for the last time
>>>>>> you ran sample or burst. This means that if you want to get the data
>>>>>> already requested, you also have to always request new samples, even
>>>>>> if you don't want them.
>>>>>>
>>>>>> I should be able to make this change with minimal code changes, but I
>>>>>> haven't done it yet because it changes the pre-existing paradigm, and
>>>>>> I wanted to get these changes in sooner rather than later :-)
>>>>>>
>>>>>> I think it might just take me another hour or so to get adjustments
>>>>>> along those lines working. There wouldn't be as long of a delay as
>>>>>> this ADC commit.
>>>>>>
>>>>>> Suggestions/comments are welcome :-)
>>>>>>
>>>>>> -jsnyder
|
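As a rough illustration of the grow case described in the quoted thread
above (case 2, where the valid data wraps around the end of the ring), here
is a minimal C sketch. It is not the eLua buf implementation; the ring_buf
struct and its field names are made up for the example, and it assumes the
buffer at least doubles (consistent with the power-of-two logsize scheme),
so the wrapped prefix always fits in the newly added space.

  #include <stdlib.h>
  #include <string.h>

  typedef unsigned short u16;

  typedef struct
  {
    u16 *data;
    unsigned size;   /* current capacity (power of two) */
    unsigned rptr;   /* read index */
    unsigned wptr;   /* write index */
  } ring_buf;

  /* Grow the buffer to new_size (>= 2 * size). Returns 0 on success, -1 on OOM. */
  static int ring_buf_grow( ring_buf *b, unsigned new_size )
  {
    u16 *p = ( u16* )realloc( b->data, new_size * sizeof( u16 ) );

    if( p == NULL )
      return -1;
    b->data = p;

    /* Case 2 from the discussion: the valid data wraps around (rptr > wptr),
       so the prefix [0, wptr) must be moved just past the old end; otherwise
       the reader would walk into the freshly added, unwritten space. */
    if( b->rptr > b->wptr )
    {
      memmove( b->data + b->size, b->data, b->wptr * sizeof( u16 ) );
      b->wptr += b->size;
    }
    /* Case 1 (wptr > rptr): nothing to move, the realloc alone is enough. */

    b->size = new_size;
    return 0;
  }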
OK, that is working now, and may be ready to go into revision control
soon.

One other question about this behavior: if I get a table, do I nil out
values that don't get new samples?

i.e., if the user does the following:

  adc.sample(0)
  adc.sample(0)
  adc.sample(0)
  a = adc.getsamples(0)

yielding a table like the following:

  a = {56, 23, 34}

and then does:

  adc.sample(0)
  a = adc.getsamples(0,a)

if the one new sample is 89, do I return this:

  a = {89, nil, nil}   (i.e. a length 1 table)

or this?

  a = {89, 23, 34}

or this?

  a = {56, 23, 34, 89}

I suppose the last one is the least destructive, but it will grow the table
every time...

They're all a little bit weird, I suppose. The other option is to have a
parameter that defines which of these methods is used. I don't want to
bloat this function call too much though...

One compromise might be to allow the parameter that follows the table to
define what index in the table to start at:

  a = adc.getsamples(0,2,a,5)

So this would mean: get 2 samples from ADC channel 0, put the results in
table a starting at index 5, and give me the table back as a result.

If this were done, I'm somewhat inclined to handle cases like the following
(where the sample count is left off) by nil-ing any values in the array
after wherever the source samples end:

  a = adc.getsamples(0,a,5)
or
  a = adc.getsamples(0,a)

Any thoughts? I know I've rambled a bit :-)

-jsnyder
|
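As a rough sketch of how the "compromise" form above, adc.getsamples( id,
count, tbl, start ), could fill a caller-supplied table, the fragment below
uses the Lua 5.1 C API that eLua builds on. It is not the actual eLua adc
module; adc_get_processed_sample() and adc_samples_available() are
hypothetical accessors standing in for whatever the real sample buffer
exposes, and u16 stands in for the ADC data type discussed earlier.

  #include "lua.h"
  #include "lauxlib.h"

  typedef unsigned short u16;

  extern u16 adc_get_processed_sample( unsigned id );   /* assumed to exist */
  extern unsigned adc_samples_available( unsigned id ); /* assumed to exist */

  static int adc_getsamples( lua_State *L )
  {
    unsigned id = ( unsigned )luaL_checkinteger( L, 1 );
    unsigned count = ( unsigned )luaL_checkinteger( L, 2 );
    unsigned start = ( unsigned )luaL_optinteger( L, 4, 1 );
    unsigned i;

    luaL_checktype( L, 3, LUA_TTABLE );

    /* Never hand back more samples than the buffer currently holds */
    if( count > adc_samples_available( id ) )
      count = adc_samples_available( id );

    for( i = 0; i < count; i ++ )
    {
      lua_pushinteger( L, ( lua_Integer )adc_get_processed_sample( id ) );
      lua_rawseti( L, 3, ( int )( start + i ) );   /* tbl[ start + i ] = sample */
    }

    lua_pushvalue( L, 3 );  /* return the table, so a = adc.getsamples(...) works */
    return 1;
  }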
On Tue, Feb 17, 2009 at 19:09, James Snyder <jbsnyder at fanplastic.org> wrote:
> OK, that is working now, and may be ready to go into revision control soon.
>
> One other question about this behavior: if I get a table, do I nil out
> values that don't get new samples?
>
> i.e., if the user does the following:
>
>   adc.sample(0)
>   adc.sample(0)
>   adc.sample(0)
>   a = adc.getsamples(0)
>
> yielding a table like the following:
>   a = {56, 23, 34}
>
> and then does:
>   adc.sample(0)
>   a = adc.getsamples(0,a)

Wasn't your second (and optional) param the number of samples to be read
(if possible) from the buffer? How exactly is "a" being used here? If you
adopted the "table to store the results" passed-as-a-param option we
discussed, then why is "a" also on the left side here?

> if the one new sample is 89, do I return this:
>
>   a = {89, nil, nil}   (i.e. a length 1 table)
> or this?
>   a = {89, 23, 34}
> or this?
>   a = {56, 23, 34, 89}
>
> I suppose the last one is the least destructive, but it will grow the
> table every time...
>
> They're all a little bit weird, I suppose. The other option is to have a
> parameter that defines which of these methods is used. I don't want to
> bloat this function call too much though...

You're right, the 1st and 2nd are weird and would force frequent table
concats on usage. The 3rd seems lighter, and I think this is already too
complex for an ADC to deserve another flag/param added to it.

> One compromise might be to allow the parameter that follows the table to
> define what index in the table to start at:
>
>   a = adc.getsamples(0,2,a,5)
>
> So this would mean: get 2 samples from ADC channel 0, put the results in
> table a starting at index 5, and give me the table back as a result.

Nice. Although still complex, it seems OK if the default for the last param
is #a (the length of the table at param 3).

> If this were done, I'm somewhat inclined to handle cases like the
> following (where the sample count is left off) by nil-ing any values in
> the array after wherever the source samples end:
>
>   a = adc.getsamples(0,a,5)
> or
>   a = adc.getsamples(0,a)
>
> Any thoughts? I know I've rambled a bit :-)

What function will clear the samples buffer?

Pls keep rambling :) The ADC is coming out quite nice :)

Best
Dado
|
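Dado's suggestion above, defaulting the last parameter to #a, could be
handled in the binding with a small helper like the one below (again Lua
5.1 API, purely illustrative; the argument positions follow the
hypothetical getsamples( id, count, tbl [, start] ) form from the earlier
sketch).

  #include "lua.h"
  #include "lauxlib.h"

  /* If the start index is omitted, default it to #tbl + 1 so new samples
     are appended after the existing ones. */
  static unsigned getsamples_start_index( lua_State *L, int tbl_idx, int start_idx )
  {
    if( lua_isnoneornil( L, start_idx ) )
      return ( unsigned )lua_objlen( L, tbl_idx ) + 1;
    return ( unsigned )luaL_checkinteger( L, start_idx );
  }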
On Feb 17, 2009, at 4:44 PM, Dado Sutter wrote:

> Wasn't your second (and optional) param the number of samples to be read
> (if possible) from the buffer? How exactly is "a" being used here? If you
> adopted the "table to store the results" passed-as-a-param option we
> discussed, then why is "a" also on the left side here?

Hmm, I didn't realize that that worked. Nice! I suppose I knew I was
modifying a table that was being pointed to...

OK, so here's the breakdown I'm thinking of:

  a = adc.getsamples(0)
    returns all pending samples on channel 0 as a table

  a = adc.getsamples(0,4)
    returns 4 pending samples

  adc.getsamples(0,a)
    appends any pending samples in the buffer to a

  adc.getsamples(0,3,a)
    appends 3 samples from the buffer to a, starting after the samples
    already in the table

  adc.getsamples(0,3,a,2)
    inserts 3 samples into a, starting at index 2

It's horribly complicated, but there aren't any ambiguous cases in terms of
the parameters being passed, and you can still use it with a really simple
call.

One really neat thing about using the starting index for the table is that
you can do things like this:

  channels = {0, 3}
  adcvals = {}

  for i, v in ipairs(channels) do
    adc.sample(v)
    adcvals = adc.getsamples(v,1,adcvals,i)
  end

so adcvals[1] gets single samples from channel 0, and adcvals[2] gets
values from channel 3.

As far as clearing the samples buffer, the following takes care of that:

  adc.flush(chan_id)

It will clear out both the smoothing and sample buffers.

-jsnyder
|
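The claim above that the call forms are unambiguous comes down to the type
of the second argument: a number means a sample count, a table means a
destination. A sketch of that dispatch is below; it is illustrative only
(Lua 5.1 C API), not the actual eLua adc module, and it stops after the
argument parsing.

  #include "lua.h"
  #include "lauxlib.h"

  static int adc_getsamples_dispatch( lua_State *L )
  {
    int count_given = 0, tbl_idx = 0, start_idx = 0;

    luaL_checkinteger( L, 1 );               /* channel id, always first */

    if( lua_istable( L, 2 ) )                /* adc.getsamples( id, tbl ) */
      tbl_idx = 2;
    else if( lua_isnumber( L, 2 ) )          /* a count was given */
    {
      count_given = 1;
      if( lua_istable( L, 3 ) )              /* ( id, n, tbl [, start] ) */
      {
        tbl_idx = 3;
        if( lua_isnumber( L, 4 ) )
          start_idx = 4;
      }
    }
    /* else: adc.getsamples( id ), no count and no table */

    /* From here on: read 'count' samples (or everything pending), write
       them into the table at tbl_idx (or a new table), starting at
       start_idx (or after the existing entries), and return the table. */
    ( void )count_given; ( void )tbl_idx; ( void )start_idx;
    return 0;
  }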
As a side note, the example:
> channels = {0, 3}
> adcvals = {}
>
> for i, v in ipairs(channels) do
>   adc.sample(v)
>   adcvals = adc.getsamples(v,1,adcvals,i)
> end

runs about as fast for adcscope.lua as the original integer version did:
~355 us per 4 channels collecting one sample each. When new tables were
being created each cycle, about 430 us per cycle were used, and that would
jump up much higher every once in a while, probably due to garbage
collection of the old tables that were being generated like crazy :-)

I'm pretty sure that most of that 355 us is due to the Lua VM and not the
underlying C code of the ADC module.
>> >> One compromise might be to allow the parameter that follows the >> table to define what index in the table to start at: >> >> a = adc.getsamples(0,2,a,5) >> >> So, this would mean, get 2 samples from adc channel 0, put the >> results in table a, starting at index 5, and give me the table back >> as a result. >> >> Nice, although still complex but seems ok if the default for the >> last param is #a (#table-at-param-3). >> >> If this were done, I'm somewhat inclined to handle cases like the >> following (where sample count is left off), by nil-ing any values >> in the array after wherever the source samples end. >> a = adc.getsamples(0,a,5) >> or >> a = adc.getsamples(0,a) >> >> Any thoughts? I know I've rambled a bit :-) >> >> What function will clear the samples buffer ? >> Pls keep rambling :) ADC is comming out quite nice :) >> >> -jsnyder >> >> Best >> Dado >> >> >> >> >> >> >> >> On Feb 17, 2009, at 9:44 AM, Bogdan Marinescu wrote: >> >>> >>> >>> On Tue, Feb 17, 2009 at 5:15 PM, James Snyder <jbsnyder at fanplastic.org >>> > wrote: >>> I'm not sure, actually. I could try always returning a table >>> whether you're getting one value or 100. >>> >>> Ah, I didn't notive this before. Dado's right, let's keep the >>> result type consistent. Which means that, for now, let's just make >>> our sample function accept a table as a parameter and return all >>> its data (be it 1 or 100 samples) in that table. >>> >>> Best, >>> Bogdan >>> >>> ----- "Dado Sutter" <dadosutter at gmail.com> wrote: >>> > Hello, >>> > >>> > >>> > On Tue, Feb 17, 2009 at 10:53, James Snyder <jbsnyder at fanplastic.org >>> > wrote: >>> > >>> > >>> > >>> > ----- "Bogdan Marinescu" <bogdan.marinescu at gmail.com> wrote: >>> > >............. >>> > >>> >>> > I suppose one could also pass an existing table to getsamples, >>> and get it returned with the results in it? >>> >>> > I would also prefer that the functions return the same type on >>> all cases (so a table passed as a parameter would do just fine for >>> both issues). >>> > Do you think that the small overhead caused by the table >>> manipulation (instead of an number) justify the returning of a >>> diferent type (a number) for critical speed sampling apps ? >>> > >>> > >>> > >>> > >>> > Best, >>> > > Bogdan >>> >>> > Best >>> > Dado >>> > >>> > >>> > >>> > >>> > >>> > >>> > >>> >>> > > >>> > >>> > >>> > > >>> > >>> On Feb 16, 2009, at 11:15 AM, James Snyder wrote: >>> >>> > > >>> > >>> > > Hi - >>> > > >>> > > Thanks for the comments :-) >>> > > >>> > >>> > > ----- "Bogdan Marinescu" <bogdan.marinescu at gmail.com> wrote: >>> > >>> > > > Hi, >>> > > > >>> > > > I still have to look at the code carefully and figure out >>> what exactly you did there :), but for now a few simple >>> observations: >>> > > > >>> > > > 1. it occured to me that since buf_init already expects a >>> logarithmic parameter, it probably makes sense to make it expect >>> two logarithmic parameters, so instead of this: >>> > > > >>> > > > int buf_set( unsigned resid, unsigned resnum, u8 logsize, >>> size_t dsize ) >>> > > > >>> > > > we'll have this: >>> > > > >>> > > > int buf_set( unsigned resid, unsigned resnum, u8 logsize, >>> size_t logdsize ) >>> > > >>> > > I was thinking about doing this, I'll make the change. >>> > > >>> > > > >>> > > > 2. 
In reply to this post by Dado Sutter
On Tue, Feb 17, 2009 at 20:05, James Snyder <jbsnyder at fanplastic.org> wrote:
> Hmm.. I didn't realize that that worked. Nice! I suppose I knew I was
> modifying a table that was being pointed to...
>
> Ok, so here's a breakdown that I'm thinking of:
>
> a = adc.getsamples(0)
> returns all pending samples on channel 0 as a table
>
> a = adc.getsamples(0,4)
> returns 4 pending samples
>
> adc.getsamples(0,a)
> appends any pending samples in the buffer to a
>
> adc.getsamples(0,3,a)
> appends 3 samples in the buffer to a, starting after the samples
> already in the table
>
> adc.getsamples(0,3,a,2)
> inserts 3 samples into a, starting at index 2
>
> It's horribly complicated, but there aren't any ambiguous cases in
> terms of the parameters being passed, and you can still use it with
> a really simple call.

Let's hear some more opinions before confirming this, James.

> One really neat thing about using the starting index for the table
> is that you can do things like this:
>
> channels = {0, 3}
>
> adcvals = {}
>
> for i, v in ipairs(channels) do
>   adc.sample(v)
>   adcvals = getsamples(v,1,adcvals,i)
> end

You mean:

for i, v in ipairs(channels) do
  adc.sample(v)
  getsamples(v,1,adcvals,i)
end

Right? Even if getsamples returned just the values, couldn't the same be done by:

for i, v in ipairs(channels) do
  adc.sample(v)
  adcvals[i] = getsamples(v,1)
end

I'm afraid I'm not seeing the real gain here.

> so adcvals[1] gets single samples from channel 0, and adcvals[2]
> gets values from channel 3.
>
> as far as clearing the samples buffer, the following takes care of
> that:
> adc.flush(chan_id)
>
> It will clear out both the smoothing and sample buffers.

Ah, great, it is there then! It is just that it did not come with your initial doc, so I hadn't been introduced to adc.flush :)

Thanks!

Best
Dado
In reply to this post by Dado Sutter
On Tue, Feb 17, 2009 at 8:05 PM, James Snyder <jbsnyder at fanplastic.org> wrote:
> Ok, so here's a breakdown that I'm thinking of:
>
> a = adc.getsamples(0)
> returns all pending samples on channel 0 as a table
>
> a = adc.getsamples(0,4)
> returns 4 pending samples
>
> adc.getsamples(0,a)
> appends any pending samples in the buffer to a
>
> adc.getsamples(0,3,a)
> appends 3 samples in the buffer to a, starting after the samples
> already in the table
>
> adc.getsamples(0,3,a,2)
> inserts 3 samples into a, starting at index 2
>
> It's horribly complicated, but there aren't any ambiguous cases in
> terms of the parameters being passed, and you can still use it with
> a really simple call.

I hope I'm not being too nosy, and I apologize if this has already been discussed here... but have you considered splitting the functionality into two functions?

Maybe you could use

a = adc.getsamples(chan, num)

and

adc.addsamples(chan, num, tab)

The first would always return a table with num samples, while the second would always append num samples to the "tab" table.

BTW, is the offset parameter really important? Wouldn't reloading samples over a previous buffer be too error prone? That's why I'm suggesting just append semantics...

Another point to consider would be the need to pass the channel number. An alternative would be to make something like:

c = adc.channel(0)
a = c:getsamples(10)

where the channel itself would be considered a first-class object.

Finally, have you considered using an API closer to LTN12? This may facilitate the future use of filters over a sample stream, if there is such a thing... :o)
http://lua-users.org/wiki/FiltersSourcesAndSinks

André
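To make the proposal concrete, here is a rough sketch of how the split and the channel object could be layered over a table-returning getsamples in plain Lua. adcx is a hypothetical wrapper table, addsamples and channel are the names floated in this message (not existing eLua functions), and the real thing would presumably live in the C module:

-- Sketch only: assumes adc.getsamples(chan, num) returns a table of samples.
local adcx = {}

function adcx.addsamples(chan, num, tab)
  local s = adc.getsamples(chan, num)   -- assumed to return a table
  for i = 1, #s do
    tab[#tab + 1] = s[i]                -- append-only; no offsets to get wrong
  end
  return tab
end

-- Channel as a first-class object, wrapping the channel id.
local channel_mt = { __index = {} }

function channel_mt.__index:getsamples(num)
  return adc.getsamples(self.id, num)
end

function adcx.channel(id)
  return setmetatable({ id = id }, channel_mt)
end

-- Usage, per the proposal:
--   local c = adcx.channel(0)
--   local a = c:getsamples(10)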
In reply to this post by Dado Sutter
Hello,
On Tue, Feb 17, 2009 at 20:21, James Snyder <jbsnyder at fanplastic.org> wrote:

> As a side note, the example:
>
> channels = {0, 3}
>
> adcvals = {}
>
> for i, v in ipairs(channels) do
>   adc.sample(v)
>   adcvals = getsamples(v,1,adcvals,i)
> end

Well, I'll have to find some more time to review this from the start. The above example seems to be very common, and the channel sampling should be as "synchronized" as possible. Not wanting to add still more complexity, it seems that the channels table (ie:) should also be a param, to "atomize" the sampling.

> Runs about as fast for adcscope.lua as the original integer version did:
> ~355 us per 4 channels collecting one sample each.

Not bad, but I expected more from the LM ADC :(

> When new tables were being created each cycle, about 430 us per cycle were
> used, which would jump up much higher every once in a while, probably due to
> garbage collection of the old tables that were being generated like crazy
> :-)

gc needs to be stopped or manually controlled here. Sampling times and sampling intervals need to be fully predictable.

> I'm pretty sure that most of that 355 us is due to the Lua VM and not the
> underlying C code of the ADC module.

Yes, probably, but gc is being forced by the adc code.

Best
Dado
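For what it's worth, a sketch of what "manually controlled" could look like around a sampling loop, using the standard Lua 5.1 collectgarbage() calls. The adc calls and the channels/adcvals tables are the ones from the example above, and the overall shape is just one option, not something the module imposes:

collectgarbage("stop")                 -- suspend automatic collection while sampling

for cycle = 1, 1000 do
  for i, v in ipairs(channels) do
    adc.sample(v)
    adc.getsamples(v, 1, adcvals, i)   -- reuse adcvals so no garbage is produced
  end
  -- process adcvals here
end

collectgarbage("restart")              -- resume automatic collection
collectgarbage("collect")              -- or pay the full cost at a moment we choose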
In reply to this post by Dado Sutter
On Feb 17, 2009, at 5:25 PM, Dado Sutter wrote:

> On Tue, Feb 17, 2009 at 20:05, James Snyder <jbsnyder at fanplastic.org> wrote:
>
> Hmm.. I didn't realize that that worked. Nice! I suppose I knew I
> was modifying a table that was being pointed to...
>
> Ok, so here's a breakdown that I'm thinking of:
>
> a = adc.getsamples(0)
> returns all pending samples on channel 0 as a table
>
> a = adc.getsamples(0,4)
> returns 4 pending samples
>
> adc.getsamples(0,a)
> appends any pending samples in the buffer to a
>
> adc.getsamples(0,3,a)
> appends 3 samples in the buffer to a, starting after the samples
> already in the table
>
> adc.getsamples(0,3,a,2)
> inserts 3 samples into a, starting at index 2
>
> It's horribly complicated, but there aren't any ambiguous cases in
> terms of the parameters being passed, and you can still use it with
> a really simple call.
>
> Let's hear some more opinions before confirming this, James.

Sure. I'll hold off on committing.

> One really neat thing about using the starting index for the table
> is that you can do things like this:
>
> channels = {0, 3}
>
> adcvals = {}
>
> for i, v in ipairs(channels) do
>   adc.sample(v)
>   adcvals = getsamples(v,1,adcvals,i)
> end
>
> You mean:
>
> for i, v in ipairs(channels) do
>   adc.sample(v)
>   getsamples(v,1,adcvals,i)
> end
>
> Right? Even if getsamples returned just the values, couldn't the same be
> done by:
>
> for i, v in ipairs(channels) do
>   adc.sample(v)
>   adcvals[i] = getsamples(v,1)
> end
>
> I'm afraid I'm not seeing the real gain here.

Right, sorry, I copied and pasted examples from a few different sources (although that version does work). The correct version would be (without having defined anything as local):

channels = {0,3}
adcvals = {}

for i, v in ipairs(channels) do
  adc.sample(v)
  adc.getsamples(v,1,adcvals,i)
end

As far as the other option goes ( adcvals[i] = getsamples(v,1) ), this would only work if getsamples returned integers in that case. I switched my local version over to return only tables regardless of the count of returned samples. You had mentioned this earlier, and I agree, that having a function return different types depending on the situation could be problematic.

If adc.getsamples only ever returns tables, you could do this:

for i, v in ipairs(channels) do
  adc.sample(v)
  adcvals[i] = adc.getsamples(v,1)[1]
end

But you still get a spray of tables coming out of the function that GC has to clean up (AFAIK, but I could be wrong). This approach runs at about 430 us per cycle if 4 channels are having samples requested and pulled out of the buffer in this fashion.

Another way to get this behavior, however, is to use this approach:

for i, v in ipairs(channels) do
  adc.sample(v)
  adc.getsamples(v,1,adcvals,i)
end

This runs at about 355 us per cycle (4 channels), puts values in the same locations of adcvals, and doesn't result in new table creation.

> so adcvals[1] gets single samples from channel 0, and adcvals[2]
> gets values from channel 3.
>
> as far as clearing the samples buffer, the following takes care of
> that:
> adc.flush(chan_id)
>
> It will clear out both the smoothing and sample buffers.
>
> Ah, great, it is there then!
> It is just that it did not come with your initial doc, so I hadn't
> been introduced to adc.flush :)
>
> Thanks!
>
> Best
> Dado
In reply to this post by Andre Carregal-2
On Tue, Feb 17, 2009 at 20:36, Andre Carregal <carregal at pobox.com> wrote:
> I hope I'm not being too nosy, and I apologize if this has already been
> discussed here... but have you considered splitting the functionality
> into two functions?
>
> Maybe you could use
>
> a = adc.getsamples(chan, num)
>
> and
>
> adc.addsamples(chan, num, tab)
>
> The first would always return a table with num samples, while the
> second would always append num samples to the "tab" table.

Sounds better indeed.

> BTW, is the offset parameter really important? Wouldn't reloading
> samples over a previous buffer be too error prone? That's why I'm
> suggesting just append semantics...

My point too, as it can be easily done by normal table indexing code if/when needed.

> Another point to consider would be the need to pass the channel
> number. An alternative would be to make something like:
>
> c = adc.channel(0)
> a = c:getsamples(10)
>
> where the channel itself would be considered a first-class object.

Nice too, clearer and semantically stronger. But it should support a list/table of channels, to "atomize" the sampling as much as possible, and I don't see how this notation could be used here.

> Finally, have you considered using an API closer to LTN12? This may
> facilitate the future use of filters over a sample stream, if there is
> such a thing... :o)
> http://lua-users.org/wiki/FiltersSourcesAndSinks

That one I will have to take a look at, because it is new to me :) Thanks!

> André

Welcome aboard, André! :)

Best
Dado
In reply to this post by Dado Sutter
On Feb 17, 2009, at 5:40 PM, Dado Sutter wrote:

> Hello,
>
> On Tue, Feb 17, 2009 at 20:21, James Snyder <jbsnyder at fanplastic.org> wrote:
>
> As a side note, the example:
>
>> channels = {0, 3}
>>
>> adcvals = {}
>>
>> for i, v in ipairs(channels) do
>>   adc.sample(v)
>>   adcvals = getsamples(v,1,adcvals,i)
>> end
>
> Well, I'll have to find some more time to review this from the start. The
> above example seems to be very common, and the channel sampling should be
> as "synchronized" as possible. Not wanting to add still more complexity, it
> seems that the channels table (ie:) should also be a param, to "atomize"
> the sampling.

That would be nice. Hmm...

> Runs about as fast for adcscope.lua as the original integer version did:
> ~355 us per 4 channels collecting one sample each.
>
> Not bad, but I expected more from the LM ADC :(

I'd like it to go faster as well. I haven't tried timing a lot of things in eLua, so I'm not sure if this corresponds to general function call overhead for the VM. That said, this is one reason why we have burst :-) I've not pulled out a function generator to figure out what the absolute maximum limit is for this, but that should achieve, or at least get quite near to, the 1 Msample/s that the LM hardware is rated for.

> When new tables were being created each cycle, about 430 us per cycle were
> used, which would jump up much higher every once in a while, probably due to
> garbage collection of the old tables that were being generated like crazy
> :-)
>
> gc needs to be stopped or manually controlled here. Sampling times and
> sampling intervals need to be fully predictable.

I agree, though I'm not sure what the best strategy is here. As far as avoiding variable sample timing, I would use burst. It should be possible to set the mode to nonblocking, tell burst to collect, say, 100 samples, and pull samples from the buffer before it has completed the run. I've not tested this at all, though. Also, on read the buffer does disable interrupts for a very short period of time when the read pointers are updated, so this could delay a sample for however many instructions it takes to complete that small section.

> I'm pretty sure that most of that 355 us is due to the Lua VM and not the
> underlying C code of the ADC module.
>
> Yes, probably, but gc is being forced by the adc code.

At this point I'm not explicitly telling the garbage collector to do anything. If it runs, it is based on the heuristics built into the VM. I'm not sure what the best strategy would be in terms of requesting or preventing garbage collection. I'm open to comments on this front, for sure :-)
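A rough sketch of the non-blocking burst idea described above, using only the call names and semantics floated in this thread; the burst signature and its non-blocking behaviour are assumptions, not an implemented interface:

local wanted = 100
adc.burst(0, wanted, 10000)     -- assumed: start a 100-sample burst on channel 0 at 10 kHz and return immediately

local samples = {}
while #samples < wanted do
  adc.getsamples(0, samples)    -- assumed: with a table argument, appends whatever is pending in the buffer
  -- other work can happen here while the hardware keeps filling the buffer
end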
In reply to this post by Dado Sutter
On Tue, Feb 17, 2009 at 8:48 PM, Dado Sutter <dadosutter at gmail.com> wrote:
>> Another point to consider would be the need to pass the channel
>> number. An alternative would be to make something like:
>>
>> c = adc.channel(0)
>> a = c:getsamples(10)
>>
>> where the channel itself would be considered a first-class object.
>
> Nice too, clearer and semantically stronger. But it should support a
> list/table of channels, to "atomize" the sampling as much as possible,
> and I don't see how this notation could be used here.

Why not simply

channels = adc.channel(0, 3, 17, 21) -- four channels

and then use

a = channels:getsamples(10)

which would get 10 samples for each channel?

André
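One way such a multi-channel object could be prototyped in plain Lua on top of the per-channel calls already under discussion. make_channels is a hypothetical helper standing in for the proposed adc.channel, and adc.sample/adc.getsamples are assumed to behave as described earlier in the thread:

local group_mt = { __index = {} }

function group_mt.__index:sample()
  for _, id in ipairs(self.ids) do
    adc.sample(id)                       -- kick off one conversion per channel
  end
end

function group_mt.__index:getsamples(num)
  local out = {}
  for i, id in ipairs(self.ids) do
    out[i] = adc.getsamples(id, num)     -- one result table per channel
  end
  return out
end

local function make_channels(...)
  return setmetatable({ ids = { ... } }, group_mt)
end

-- channels = make_channels(0, 3)
-- channels:sample()
-- a = channels:getsamples(1)            -- a[1] from channel 0, a[2] from channel 3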
In reply to this post by Dado Sutter
On Tue, Feb 17, 2009 at 20:43, James Snyder <jbsnyder at fanplastic.org> wrote:
> ........
> As far as the other option goes ( adcvals[i] = getsamples(v,1) ), this would
> only work if getsamples returned integers in that case.

Why? adcvals would contain a result table for each channel in my example.
And pls remember we prefer to have functions returning predictable types too.

> I switched my local version over to return only tables regardless of the
> count of returned samples. You had mentioned this earlier, and I agree,
> that having a function return different types depending on the situation
> could be problematic.

Right, and I have just mentioned it again (above) :) sorry :)

> If adc.getsamples only ever returns tables, you could do this:
>
> for i, v in ipairs(channels) do
>   adc.sample(v)
>   adcvals[i] = adc.getsamples(v,1)[1]
> end

Or simply:

for i, v in ipairs(channels) do
  adc.sample(v)
  adcvals[i] = adc.getsamples(v,1)
end

... and adcvals would be a table keeping the channels' sampled tables.

> But you still get a spray of tables coming out of the function that GC has
> to clean up (AFAIK, but I could be wrong). This approach runs at about 430
> us per cycle if 4 channels are having samples requested and pulled out of
> the buffer in this fashion.

Right. GC can be tamed, but RAM is still a precious resource on current MCUs.
I think we should try to move more functionality to the C level.
But I really need to stop and take a more careful look at the whole adc API. It seems too complex for simple (single channel, non-filtered, ....) sampling, and we need to make it more "generic" too.

Best
Dado
On Feb 17, 2009, at 6:15 PM, Dado Sutter wrote:

> On Tue, Feb 17, 2009 at 20:43, James Snyder <jbsnyder at fanplastic.org> wrote:
> ........
> As far as the other option goes ( adcvals[i] = getsamples(v,1) ), this would
> only work if getsamples returned integers in that case.
>
> Why? adcvals would contain a result table for each channel in my example.
> And pls remember we prefer to have functions returning predictable types too.
>
> I switched my local version over to return only tables regardless of the
> count of returned samples. You had mentioned this earlier, and I agree,
> that having a function return different types depending on the situation
> could be problematic.
>
> Right, and I have just mentioned it again (above) :) sorry :)
>
> If adc.getsamples only ever returns tables, you could do this:
>
> for i, v in ipairs(channels) do
>   adc.sample(v)
>   adcvals[i] = adc.getsamples(v,1)[1]
> end
>
> Or simply:
>
> for i, v in ipairs(channels) do
>   adc.sample(v)
>   adcvals[i] = adc.getsamples(v,1)
> end
>
> ... and adcvals would be a table keeping the channels' sampled tables.

Ah, I get it now. I think I misunderstood before. I wasn't thinking in terms of nesting tables.

This still has the same issue with making new tables each time the function is called, which Bogdan was referencing. If I use this approach in adcscope it takes around 510 us per cycle (4 channels). Each time I do an assignment as follows:

adcvals[i] = adc.getsamples(v,1)

I displace the table at adcvals[i] and put a new one in its place. This old table has to be garbage collected. If we turn GC off, then we'll run out of memory as we continuously sample.

> But you still get a spray of tables coming out of the function that GC has
> to clean up (AFAIK, but I could be wrong). This approach runs at about 430
> us per cycle if 4 channels are having samples requested and pulled out of
> the buffer in this fashion.
>
> Right. GC can be tamed, but RAM is still a precious resource on current MCUs.
> I think we should try to move more functionality to the C level.
> But I really need to stop and take a more careful look at the whole adc API.
> It seems too complex for simple (single channel, non-filtered, ....)
> sampling, and we need to make it more "generic" too.

It certainly could be simpler, but there are tradeoffs. I plan to have a version that will work without buffering, so that you can just do simple single-sample, single-channel acquisition (it still would require issuing both adc.sample and adc.getsamples, though). This could be switched at compile time.

As far as being generic is concerned, I think everything that's in there is easily portable to any of the other architectures that have an ADC. The only platform-specific functions that are needed are the interrupt handler to put things in the buffer, a function to stop burst mode, and the functions for setting up sample and burst. If you mean in the sense of the API and whatnot, that may be the case. Ideas are welcome :-)

-jsnyder
In reply to this post by Andre Carregal-2
On Tue, Feb 17, 2009 at 21:15, Andre Carregal <carregal at pobox.com> wrote:
> Why not simply
>
> channels = adc.channel(0, 3, 17, 21) -- four channels
>
> and then use
>
> a = channels:getsamples(10)

Yesssss ! :)
(and I wish I had 21 channels on one of my boards :)

> which would get 10 samples for each channel?

Snyder, ..... ah, you have already answered .......... :)

> André

Dado
In reply to this post by Dado Sutter
On Feb 17, 2009, at 5:48 PM, Dado Sutter wrote:

> On Tue, Feb 17, 2009 at 20:36, Andre Carregal <carregal at pobox.com> wrote:
>
> I hope I'm not being too nosy, and I apologize if this has already been
> discussed here... but have you considered splitting the functionality
> into two functions?

You are definitely not being nosy :-) More input is better. I'd much rather put together something resulting from a lot of discussion than get something that needs to be rewritten in a few months.

> Maybe you could use
>
> a = adc.getsamples(chan, num)
>
> and
>
> adc.addsamples(chan, num, tab)
>
> The first would always return a table with num samples, while the
> second would always append num samples to the "tab" table.
>
> Sounds better indeed.

This would be cleaner, I agree.

> BTW, is the offset parameter really important? Wouldn't reloading
> samples over a previous buffer be too error prone? That's why I'm
> suggesting just append semantics...
>
> My point too, as it can be easily done by normal table indexing
> code if/when needed.

Hmm.. I won't say it isn't ugly in some ways. One way to make it less error prone would be to require both the starting index and the number of samples to be copied whenever one is modifying a table. This way the portion of the table being modified is explicitly specified.

> Another point to consider would be the need to pass the channel
> number. An alternative would be to make something like:
>
> c = adc.channel(0)
> a = c:getsamples(10)
>
> where the channel itself would be considered a first-class object.
>
> Nice too, clearer and semantically stronger. But it should support a
> list/table of channels, to "atomize" the sampling as much as possible,
> and I don't see how this notation could be used here.
>
> Why not simply
>
> channels = adc.channel(0, 3, 17, 21) -- four channels
>
> and then use
>
> a = channels:getsamples(10)
>
> which would get 10 samples for each channel?
>
> André

Ooh, that _would_ be nice... :-) I'll have to think about that...

I'm still inclined to keep initiation of sampling and picking up of samples separate, though, unless there is a way to seriously reduce the number of microseconds each function call takes. If channels:getsamples(10) does both initiation and collection of samples back to Lua, the timing between samples collected might be deterministic, but if you're doing that repeatedly, the time in between each call will not be :-)

Maybe something like:

channels = adc.channel(0, 3, 17, 20)

then:

channels:sample()

or

channels:burst(count, frequency)

and then use:

a = channels:getsamples(10)

to pick up samples. Hmm...

> Finally, have you considered using an API closer to LTN12? This may
> facilitate the future use of filters over a sample stream, if there is
> such a thing... :o)
> http://lua-users.org/wiki/FiltersSourcesAndSinks
>
> That one I will have to take a look at, because it is new to me :) Thanks!

Hmm... I hadn't seen that. I'll check this out :-)
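For completeness, how the flow sketched in the previous message might read from a script if the group object and the separate initiate/collect calls were adopted; every name here (adc.channel, :sample, :burst, :getsamples) is a proposal from this thread rather than an implemented API, and the burst parameters are assumptions:

local channels = adc.channel(0, 3)   -- group the channels of interest

-- Software-paced, one sample per channel:
channels:sample()                    -- start one conversion per channel, non-blocking
local a = channels:getsamples(1)     -- pick up the results later: a[1], a[2]

-- Hardware-paced burst, decoupled from Lua call overhead:
channels:burst(100, 10000)           -- assumed: 100 samples per channel at 10 kHz
local b = channels:getsamples(100)   -- collect once the buffers have filled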