
Wrong timestamp for HR based on event_timestamps (HRM Swim .fit files) #122

Open
Kypaz opened this issue Mar 10, 2021 · 4 comments
@Kypaz

Kypaz commented Mar 10, 2021

Hi,
I have recently tried to integrate HR data from the 'hr' records at the end of the .fit file, because that's what the HRM-Swim sensor produces.

No problem with that, and I've come across #69, but I believe we retrieve a timestamp directly, with no need for any special treatment.

So basically, inside an 'hr' record we have 8 'event_timestamp' fields, each holding a timestamp, and at the end an array of 8 bpm values, so they are quite easy to match up (see the sketch after the values below).

event_timestamp 1560.744140625
event_timestamp 1561.9326171875
event_timestamp 1561.9453125
event_timestamp 1563.3662109375
event_timestamp 1565.62109375
event_timestamp 1566.1171875
event_timestamp 1566.373046875
event_timestamp 1566.8525390625
event_timestamp_12 (250, 178, 123, 200, 119, 215, 124, 134, 135, 126, 153, 182)
filtered_bpm (73, 74, 74, 74, 74, 74, 75, 75)
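
A minimal sketch of the matching (assuming python-fitparse; get_values() keeps only one value per field name, so the event_timestamps are read from msg.fields directly):

```python
from fitparse import FitFile

fitfile = FitFile("hrmswim.fit")  # path to the attached file
for msg in fitfile.get_messages("hr"):
    # collect every event_timestamp field of the record, in order
    timestamps = [f.value for f in msg.fields if f.name == "event_timestamp"]
    bpm = msg.get_value("filtered_bpm")
    # data records carry a tuple of 8 bpm values and 8 event_timestamps
    if isinstance(bpm, tuple) and len(timestamps) == len(bpm):
        for ts, hr in zip(timestamps, bpm):
            print(ts, hr)
```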

Here's the problem: the timestamps seem "OK", but they don't quite match the duration of the session.

I've attached a .fit file below; the maximum timestamp retrieved is 1567.8544921875 seconds ≈ 26.1 minutes, but the overall duration of the session is 1h10.

I believe all the bpm data are there; it's just that the associated timestamps are wrong.

I think this issue was first addressed in #26 but closed due to the lack of an example.

hrmswim.fit.zip

@pR0Ps

Please ask if you need more information on my end

Thanks for the help !

Alexandre

[Edit: You can just import the .fit file into Garmin Connect or GoldenCheetah if you want to see the "real" data and the heart rate stream]

[Screenshot attached: Capture d’écran 2021-03-10 à 11 41 57]

[Edit 2: For reference, GoldenCheetah seems to handle this in its 'decodeHr' method]

[Edit 3: For reference as well, see 'Plugin Example (HR)' here]

@Kypaz

Kypaz commented Apr 7, 2021

Hello. Can anyone confirm this issue?

@Lingepumpe

Lingepumpe commented Sep 15, 2021

Hi @Kypaz

I am looking at analysing "appended" HR data in .fit files; did you ever get any further with this?

In my .fit file there are a few points where the "normal" HR data was received (because the heart rate strap and watch were out of the water for a short while). Looking at it with fitparse, the appended hr records look like this:

mesg_type.name == 'hr', get_values(): {'timestamp': datetime.datetime(2021, 9, 12, 7, 56, 4), 'event_timestamp': 2242909.0, 'fractional_timestamp': 0.0, 'filtered_bpm': 71, 'unknown_251': (0,)}
mesg_type.name == 'hr', get_values(): {'filtered_bpm': (71, 71, 71, 71, 71, 71, 71, 71), 'event_timestamp': 4.841796875, 'event_timestamp_12': (0, 80, 33, 185, 116, 102, 205, 217, 213, 206, 224, 53)}
mesg_type.name == 'hr', get_values(): {'filtered_bpm': (71, 71, 71, 71, 71, 71, 71, 71), 'event_timestamp': 18.416015625, 'event_timestamp_12': (0, 246, 122, 113, 25, 46, 121, 209, 175, 16, 161, 154)}
[... a few more of these with increasing event_timestamp ...]
mesg_type.name == 'hr', get_values(): {'filtered_bpm': (88, 88, 88, 88, 88, 88, 88, 88), 'event_timestamp': 165.423828125, 'event_timestamp_12': (23, 44, 228, 13, 113, 61, 116, 90, 222, 85, 33, 91)}
mesg_type.name == 'hr', get_values(): {'filtered_bpm': (88, 88, 88, 89, 89, 89, 89, 89), 'event_timestamp': 173.4541015625, 'event_timestamp_12': (188, 55, 162, 25, 103, 197, 44, 29, 93, 209, 21, 93)}
mesg_type.name == 'hr', get_values(): {'timestamp': datetime.datetime(2021, 9, 12, 7, 59, 3), 'event_timestamp': 2243088.0, 'fractional_timestamp': 0.0, 'filtered_bpm': 89, 'unknown_251': (0,)}
mesg_type.name == 'hr', get_values(): {'filtered_bpm': (89, 89, 88, 88, 88, 88, 88, 88), 'event_timestamp': 11.14453125, 'event_timestamp_12': (0, 48, 83, 183, 109, 100, 120, 235, 38, 242, 74, 201)}
mesg_type.name == 'hr', get_values(): {'filtered_bpm': (89, 90, 90, 90, 89, 89, 89, 89), 'event_timestamp': 19.1826171875, 'event_timestamp_12': (36, 79, 27, 230, 198, 186, 108, 176, 81, 229, 183, 203)}
mesg_type.name == 'hr', get_values(): {'filtered_bpm': (89, 89, 90, 90, 90, 91, 91, 91), 'event_timestamp': 27.2001953125, 'event_timestamp_12': (114, 111, 74, 186, 170, 226, 124, 194, 94, 92, 217, 204)}
[...]

So the pattern seems to be: I get one "hr" record with a timestamp, then a whole bunch of "hr" records with filtered_bpm set and an increasing event_timestamp. Then the pattern repeats with a fresh record carrying a timestamp, and the following "hr" records with filtered_bpm restart the event_timestamp at zero again.
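
Roughly, the pattern can be walked like this (a minimal sketch with python-fitparse; the file name is a placeholder):

```python
from fitparse import FitFile

last_sync = None
for msg in FitFile("activity.fit").get_messages("hr"):
    values = msg.get_values()
    if "timestamp" in values:
        # sync record: absolute datetime plus an absolute event_timestamp
        last_sync = (values["timestamp"], values["event_timestamp"])
    else:
        # data record: tuple of 8 bpm values, event_timestamp relative to the sync
        print(last_sync, values.get("event_timestamp"), values["filtered_bpm"])
```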

Comparing this to the output of the fit2csv tool from the FIT SDK, the first few lines look like this:

Data,12,hr,timestamp,"1000367764",,event_timestamp,"2242909.0",s,fractional_timestamp,"0.0",s,filtered_bpm,"71",bpm,unknown,"0"
Data,11,hr,filtered_bpm,"71|71|71|71|71|71|71|71",bpm,event_timestamp_12,"0|80|33|185|116|102|205|217|213|206|224|53",,event_timestamp,"2242912.0|2242912.5205078125|2242913.1806640625|2242913.6005859375|2242914.4501953125|2242915.3408203125|2242916.201171875|2242916.841796875",s
Data,11,hr,filtered_bpm,"71|71|71|71|71|71|71|71",bpm,event_timestamp_12,"0|246|122|113|25|46|121|209|175|16|161|154",,event_timestamp,"2242917.5|2242917.9208984375|2242918.3603515625|2242920.7197265625|2242924.3681640625|2242926.7470703125|2242928.265625|2242930.416015625",s
Data,11,hr,filtered_bpm,"71|71|71|72|72|72|72|74",bpm,event_timestamp_12,"98|75|210|211|110|23|6|68|105|4|122|229",,event_timestamp,"2242930.845703125|2242931.28515625|2242931.7060546875|2242932.365234375|2242933.005859375|2242933.64453125|2242934.50390625|2242935.5849609375",s

Note that through fitparse I do not see multiple "event_timestamp" values within one "filtered_bpm" record, unlike what you seem to have. In the fit2csv output, "event_timestamp" does contain multiple values.

Some observations:

  • The timestamp of datetime.datetime(2021, 9, 12, 7, 56, 4) is ~2h57min before the start of the activity. Even interpreting it as a UTC datetime it is still 57 min before the activity started, so it seems wrong (see the small conversion check after this list).
  • As stated above, event_timestamp is a single value instead of a tuple in the "filtered_bpm" messages, and it also seems to be an offset relative to the last "timestamp" record's event_timestamp, which is not the case in the fit2csv CSV file.
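
(Side note: the raw value "1000367764" behind that datetime can be checked against the FIT epoch of 1989-12-31 00:00:00 UTC, which supports interpreting it as a UTC instant:)

```python
from datetime import datetime, timedelta, timezone

# FIT timestamps are seconds since the FIT epoch, 1989-12-31 00:00:00 UTC
FIT_EPOCH = datetime(1989, 12, 31, tzinfo=timezone.utc)
print(FIT_EPOCH + timedelta(seconds=1000367764))  # 2021-09-12 07:56:04+00:00
```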

@Lingepumpe

After looking into this a bit more, these are my observations so far:

  • The fact that the initial "timestamp" is ~57 min early (when interpreted as a UTC timestamp) seems to be simply because the HR strap was put on well in advance of starting the effort. Checking the GoldenCheetah decodeHR() function in https://github.com/GoldenCheetah/GoldenCheetah/blob/master/src/FileIO/FitRideFile.cpp: it drops any HR data outside of the activity range (before the session start and after the session end). So in effect, for my .fit file the first ~57 min will be dropped.
  • The introduction of hr records that contain a "timestamp" seems to be for synchronization purposes.
  • fitparse actually does see multiple "event_timestamp" values in the records containing "filtered_bpm", but only the last one is shown via get_values(), because a dict can only hold a single "event_timestamp" key. Looking at record.fields shows all of the event_timestamp fields:
[('timestamp', datetime.datetime(2021, 9, 12, 7, 56, 4)), ('event_timestamp', 2242909.0), ('fractional_timestamp', 0.0), ('filtered_bpm', 71), ('unknown_251', (0,))]  # initial synchronization record
[<FieldData: filtered_bpm: (71, 71, 71, 71, 71, 71, 71, 71) [bpm], def num: 6, type: uint8 (uint8), raw value: (71, 71, 71, 71, 71, 71, 71, 71)>,
<FieldData: event_timestamp: 0.0 [s], def num: 9, type: uint32 (uint32), raw value: 0.0>,
<FieldData: event_timestamp: 0.5205078125 [s], def num: 9, type: uint32 (uint32), raw value: 0.5205078125>,
<FieldData: event_timestamp: 1.1806640625 [s], def num: 9, type: uint32 (uint32), raw value: 1.1806640625>,
<FieldData: event_timestamp: 1.6005859375 [s], def num: 9, type: uint32 (uint32), raw value: 1.6005859375>,
<FieldData: event_timestamp: 2.4501953125 [s], def num: 9, type: uint32 (uint32), raw value: 2.4501953125>,
<FieldData: event_timestamp: 3.3408203125 [s], def num: 9, type: uint32 (uint32), raw value: 3.3408203125>,
<FieldData: event_timestamp: 4.201171875 [s], def num: 9, type: uint32 (uint32), raw value: 4.201171875>,
<FieldData: event_timestamp: 4.841796875 [s], def num: 9, type: uint32 (uint32), raw value: 4.841796875>,
<FieldData: event_timestamp_12: (0, 80, 33, 185, 116, 102, 205, 217, 213, 206, 224, 53), def num: 10, type: byte (byte), raw value: (0, 80, 33, 185, 116, 102, 205, 217, 213, 206, 224, 53)>]

The corresponding fit2csv tool output for this record is

Data,12,hr,timestamp,"1000367764",,event_timestamp,"2242909.0",s,fractional_timestamp,"0.0",s,filtered_bpm,"71",bpm,unknown,"0"
Data,11,hr,filtered_bpm,"71|71|71|71|71|71|71|71",bpm,event_timestamp_12,"0|80|33|185|116|102|205|217|213|206|224|53",,event_timestamp,"2242912.0|2242912.5205078125|2242913.1806640625|2242913.6005859375|2242914.4501953125|2242915.3408203125|2242916.201171875|2242916.841796875",s

Note that fitparse restarts the event_timestamps at zero after each resync, while fit2csv continues with absolute event_timestamp values. This is not a problem by itself, but fitparse also uses the wrong zero point for the event_timestamp, making it impossible to use the event_timestamps of the tupled filtered_bpm values:
In the above example the resync is at an event_timestamp of 2242909.0 [correctly shown by fitparse], but the tuple of filtered_bpm is listed at timestamps [0.0, 0.52, 1.18, 1.60, ...], when in fact they should be shifted by 3 seconds to [3.0, 3.52, 4.18, 4.60, ...], since the absolute timestamps shown by fit2csv are "2242912.0|2242912.5205078125|2242913.1806640625|2242913.6005859375".

My questions currently are:

  • What would be a good way to show multiple event_timestamp values within a single record via fitparse's get_values()?
  • Where in the fitparse source could I adjust these event_timestamps to either show absolute values, like fit2csv, or at least values relative to the last resync point? I have not been able to find where this happens in the code.
  • Are the multiple event_timestamp values always present in stored hr data? It seems they would make the event_timestamp_12 values unnecessary (and with them the cumbersome computations needed to convert them to proper timestamps, as seen in the Python code in Problem on timestamp extraction #69 and in the GoldenCheetah source).

@polyvertex any thoughts?

@Lingepumpe

After some more analysis, I realized that the event_timestamp value appearing in records that contain filtered_bpm tuples is not handled correctly by fitparse, and it seems to be some undocumented extension (?), as it is also not mentioned in the "definition" record:

Definition,11,hr,filtered_bpm,8,,event_timestamp_12,12

As you can see, only event_timestamp_12 is mentioned there. So, instead of trying to use an undefined/undocumented field, I chose to do what GoldenCheetah and the code in #69 do: use event_timestamp_12 and some bit-shifting to calculate the timestamp for each "filtered_bpm" value.
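
For reference, a rough sketch of that calculation (modeled on what GoldenCheetah's decodeHr() and the code in #69 do; the bit layout is my reading of it, checked only against the sample values earlier in this thread):

```python
def unpack_event_timestamp_12(data):
    """Unpack the 12 bytes into eight 12-bit values (units of 1/1024 s)."""
    values = []
    for i in range(0, 12, 3):
        b0, b1, b2 = data[i], data[i + 1], data[i + 2]
        values.append(b0 | ((b1 & 0x0F) << 8))  # full byte + low nibble
        values.append((b1 >> 4) | (b2 << 4))    # high nibble + full byte
    return values

def accumulate(counter, twelve_bit_values):
    """Merge each 12-bit value into the low 12 bits of the running counter
    (1/1024 s units), carrying into the upper bits on rollover."""
    out = []
    for v in twelve_bit_values:
        if v < (counter & 0xFFF):
            counter += 0x1000            # the 12-bit value wrapped around
        counter = (counter & ~0xFFF) | v
        out.append(counter / 1024.0)     # back to seconds
    return out

# Using the records above: sync at event_timestamp 2242909.0 s, followed by
# event_timestamp_12 = (0, 80, 33, 185, 116, 102, 205, 217, 213, 206, 224, 53)
ts = accumulate(int(2242909.0 * 1024),
                unpack_event_timestamp_12((0, 80, 33, 185, 116, 102, 205, 217, 213, 206, 224, 53)))
# ts == [2242912.0, 2242912.5205078125, ..., 2242916.841796875], matching fit2csv
```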

Possibly this bug can be closed, on the grounds that the event_timestamp field that @Kypaz is trying to use is not previously defined and hence does not work properly.

If possible, it would still be nice if fitparse could properly decode and accumulate these values (even when they are not previously defined), as the fit2csv utility does, or otherwise discard them if they cannot be decoded properly due to the missing definition.
