This could do with being generalised alongside the other save functions (in
io_services). For now, its job is to save the list of readings that
contain four or more readings per second, at least one of which is
over 39. The specific value of 39 comes from the test data: it is
from 'factory1' (Light Meter), and 39 is the threshold for triggering
the lights connected to 'gallery1' in the gallery.
None of this is encoded into the code itself; it is implied and needs
to be used with that in mind. Really it is more of a 'save list'
function, and I will probably rename/refactor it in the future
depending on how the project develops.
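A minimal sketch of the save behaviour described above. The function name, the (timestamp, value) tuple shape, and the CSV output layout are all assumptions for illustration; only the "four or more readings per second, at least one over 39" rule comes from the notes.

```python
import csv
from collections import defaultdict

# Assumed trigger level for 'gallery1' from the notes above.
THRESHOLD = 39
MIN_READINGS_PER_SECOND = 4

def filter_and_save(readings, path):
    """Hypothetical sketch: group (timestamp, value) tuples by their
    whole-second timestamp, then save only the seconds that have four
    or more readings with at least one value over the threshold."""
    per_second = defaultdict(list)
    for timestamp, value in readings:
        per_second[timestamp].append(value)

    with open(path, "w", newline="") as f:
        writer = csv.writer(f)
        for timestamp, values in per_second.items():
            if len(values) >= MIN_READINGS_PER_SECOND and max(values) > THRESHOLD:
                for value in values:
                    writer.writerow([timestamp, value])
```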
I made the mistake of not realising the dictionary was 'removing'
duplicate entries by simply not adding the second reading for a
given time interval. For example, there were two readings within the
fifty-sixth second (e.g. 07:03:56), but the second one was being
omitted from the dictionary storing the data after it was loaded
into memory.
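The mistake boils down to keying a dictionary by the whole-second timestamp, so any later reading within the same second never makes it in. A minimal reconstruction (the exact guard in the original code is an assumption; it may have overwritten rather than skipped, but the effect is the same: one reading per second survives):

```python
# Two readings land within the same second (07:03:56).
rows = [("07:03:56", 31), ("07:03:56", 44)]

by_second = {}
for timestamp, value in rows:
    if timestamp not in by_second:  # second reading is silently dropped here
        by_second[timestamp] = value

# by_second is {"07:03:56": 31} -- the 44 is gone
```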
I changed the return type to a list of tuples to preserve the raw
shape of the data (i.e. multiple readings per second). The intention
is that I can start from the 'raw' data without needing to load it
numerous times at run-time. I've omitted the 'Id' column because I
have no need for it in this context. If I do need it, though, I can
add an extra item to the returned tuple (i.e. add r[0] to the append).
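A sketch of what that list-of-tuples loader might look like. The function name and the Id,Timestamp,Value column order are assumptions; the r[0]/r[1]/r[2] indexing mirrors the r[0] mentioned above.

```python
import csv

def load_raw_readings(path):
    """Hypothetical loader: keep every row as a (timestamp, value)
    tuple so multiple readings per second survive. The Id column
    (r[0]) is skipped, but could be appended to the tuple later."""
    readings = []
    with open(path, newline="") as f:
        reader = csv.reader(f)
        next(reader)  # assumed header row
        for r in reader:
            readings.append((r[1], int(r[2])))
    return readings
```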
This bug came about because I took most of the code from the initial
'load data' function. The original function converted the raw CSV data
into a dictionary that tallied the total readings per second before
returning it. This function doesn't do that; it leaves the data in a
more raw state.
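For contrast, the tallying behaviour of the original 'load data' function might be sketched like this (an assumption based on the description above). Collapsing to one dictionary entry per second is fine for a tally, which is exactly why reusing that shape to store the readings themselves lost the second reading per second:

```python
from collections import Counter

rows = [("07:03:56", 31), ("07:03:56", 44), ("07:03:57", 12)]

# Count how many readings fall within each whole second.
tally = Counter(timestamp for timestamp, _ in rows)
# tally: {"07:03:56": 2, "07:03:57": 1}
```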