I think you're trying to apply concepts that the receiver is managing differently.
The placement of the 1PPS is reported with a quantization error based on the receiver's clocking speed and its understanding of time.
It uses GPS to quantify the performance of the local clock (oscillator), i.e. how it deviates from the perfect frequency. The receiver calls this the drift. The measurement timebase in the receiver is nominally 1 ms; for non-zero drift it will actually be slightly more or less than 1 ms, and over time this error accumulates as a bias. If the bias accumulates to +1 ms, the receiver can shorten the integration time by 1 ms to reduce how far local time has walked off.
If the local clock is slow by, say, 1 ppm, each second of receiver time is actually 1 s + 1 µs of real time. After 1000 s the real elapsed time is 1000.001 s, and to remedy that, 1 ms is taken out of the receiver's timeline: a slewing event that will occur every ~1000 seconds.
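The arithmetic behind that slew interval can be sketched like this (my own illustration of the numbers above, not receiver code):

```python
# A clock slow by 1 ppm gains 1 us of bias every real second.
drift_us_per_s = 1          # 1 ppm expressed as microseconds of bias per second
slew_threshold_us = 1000    # the receiver slews once the bias reaches 1 ms

# How often a 1 ms slew is needed to keep local time aligned.
slew_interval_s = slew_threshold_us // drift_us_per_s
print(slew_interval_s)  # 1000 -> one 1 ms slew roughly every 1000 s
```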
The UBX-NAV-CLOCK message reports the clock bias and drift we're talking about, and defines the time at which the measurements in the receiver were made. This is the receiver's own estimate of when they occurred, to the sub-nanosecond level, based on its observation of the local oscillator against the timing of the GPS signals.
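As a sketch of what that message carries, here is a minimal decoder for the UBX-NAV-CLOCK payload, assuming the 20-byte little-endian layout given in the u-blox interface description (iTOW in ms, clkB in ns, clkD in ns/s, tAcc in ns, fAcc in ps/s); check your receiver's own interface description for the exact fields:

```python
import struct

def parse_nav_clock(payload: bytes) -> dict:
    # UBX-NAV-CLOCK payload, little-endian:
    #   iTOW  U4  GPS time of week of the navigation epoch [ms]
    #   clkB  I4  clock bias  [ns]
    #   clkD  I4  clock drift [ns/s]
    #   tAcc  U4  time accuracy estimate [ns]
    #   fAcc  U4  frequency accuracy estimate [ps/s]
    itow, clk_b, clk_d, t_acc, f_acc = struct.unpack('<IiiII', payload)
    return {'iTOW_ms': itow, 'clkB_ns': clk_b, 'clkD_ns_s': clk_d,
            'tAcc_ns': t_acc, 'fAcc_ps_s': f_acc}

# Fabricated example payload: -250 ns bias, +12 ns/s drift
payload = struct.pack('<IiiII', 123456000, -250, 12, 30, 150)
print(parse_nav_clock(payload))
```

A drift of +12 ns/s here would correspond to a 0.012 ppm frequency error, well inside what a decent TCXO delivers.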
Based on the same understanding of this timeline, the receiver places the 1PPS on one of the edges of a multi-MHz clock derived from the same local clock source. The synchronous nature of this design means there will be a quantization error in this placement, set by the granularity of that clock. The receiver reports how far off the edge is via UBX-TIM-TP.
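The quantization effect can be sketched with illustrative numbers (the 48 MHz internal clock below is an assumption for the example, not a u-blox specification; the sign convention of the reported error is also not claimed here):

```python
# The PPS edge can only land on edges of the receiver's internal clock, so
# the achievable placement is the ideal instant rounded to that grid. The
# residual is what UBX-TIM-TP's quantization-error field (in ps) accounts for.
clk_hz = 48_000_000                 # assumed internal clock for illustration
tick_ps = round(1e12 / clk_hz)      # grid spacing: 20833 ps (~20.8 ns)

ideal_edge_ps = 1_000_000_007_000   # ideal PPS instant, ps past some epoch
actual_edge_ps = round(ideal_edge_ps / tick_ps) * tick_ps
q_err_ps = ideal_edge_ps - actual_edge_ps
print(q_err_ps)                     # residual, always within +/- tick_ps/2
```

A timing application can apply this reported residual to correct the observed PPS edge well below the granularity of the internal clock.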