I made the mistake of not realising the dictionary was 'removing' duplicate entries by simply not adding the second reading for a given time interval. For example, there were two readings within the fifty-sixth second (e.g. 07:03:56), but the second one was being omitted from the dictionary storing the data after it was loaded into memory.

I changed the return type to a list of tuples to preserve the raw part of the data (i.e. multiple readings per second). The intention is to be able to start from the 'raw' data without needing to load it numerous times at run-time. I've omitted the 'Id' column because I have no need for it in this context. If I do need it, though, I can add an extra item to the returned tuple (i.e. add r[0] to the append call).

This bug came about because I took most of the code from the initial 'load data' function. The original function converted the raw CSV data into a dictionary which tallied the total readings per second before returning it. This function doesn't do that; it leaves the data in a more raw state.
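A minimal Python sketch of the bug and the fix described above. The CSV sample, column layout, and function names here are all hypothetical illustrations, not taken from the actual repository code:

```python
import csv
import io

# Hypothetical CSV data: two readings fall within the same second
# (07:03:56), mirroring the duplicate-reading scenario described above.
RAW_CSV = """Id,Timestamp,Value
1,07:03:55,10.1
2,07:03:56,10.4
3,07:03:56,10.6
4,07:03:57,10.2
"""


def load_as_dict(csv_text):
    # The original (buggy) shape: keying on the timestamp means the
    # second reading for a repeated second is silently never added.
    readings = {}
    reader = csv.reader(io.StringIO(csv_text.strip()))
    next(reader)  # skip the header row
    for r in reader:
        if r[1] not in readings:
            readings[r[1]] = float(r[2])
    return readings


def load_as_tuples(csv_text):
    # The fix: a list of (timestamp, value) tuples preserves every
    # reading. The 'Id' column (r[0]) is omitted; it could be restored
    # by adding r[0] as an extra item in the tuple.
    reader = csv.reader(io.StringIO(csv_text.strip()))
    next(reader)  # skip the header row
    return [(r[1], float(r[2])) for r in reader]
```

With the sample data above, the dictionary version holds three entries (one per distinct second) while the tuple version keeps all four readings, including both from 07:03:56.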
Craig Oates
3 years ago
1 changed file with 2 additions and 2 deletions