Because tracks already have AI nodes that follow the geometry, would it be feasible to have the camera logic read from the AI nodes to get its initial plane? Yaw would try to orbit to the back of the car as usual, but when the track hits a slope or starts bucking around, the camera would follow suit, with pitch and roll moving with the track and carrying basically the same inertia that yaw does.
(An alternative way of anchoring the camera would be to match the car's orientation, but shockwaves, bombs, etc. throw a few wrenches into the works.)
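
Very roughly, the kind of thing I'm imagining (a sketch only: AINode, Camera, FindNearestAINode and the lag constant are made-up placeholders, not the game's actual camera code):

```c
/* Rough sketch -- all names and constants here are hypothetical,
 * not the real camera implementation. */

#include <math.h>

#define PI_F 3.14159265f

typedef struct { float x, y, z; } Vec3;

typedef struct {
    Vec3 pos;
    Vec3 forward;   /* direction along the track at this node */
    Vec3 up;        /* track-surface normal at this node      */
} AINode;

typedef struct {
    float yaw, pitch, roll;   /* current camera orientation, radians */
} Camera;

/* Assumed helper: returns the AI node nearest the car (ideally
 * interpolated between the two closest nodes so the plane changes smoothly). */
extern AINode FindNearestAINode(Vec3 carPos);

/* Fraction of the remaining error closed each frame -- ideally the same
 * value yaw already uses, so all three axes share the same inertia. */
#define CAM_ORIENT_LAG 0.12f

/* Wrap an angle into [-pi, pi] so we always turn the short way round. */
static float WrapPi(float a)
{
    while (a >  PI_F) a -= 2.0f * PI_F;
    while (a < -PI_F) a += 2.0f * PI_F;
    return a;
}

static float ApproachAngle(float current, float target, float lag)
{
    return current + WrapPi(target - current) * lag;
}

void UpdateCameraPlane(Camera *cam, Vec3 carPos)
{
    AINode node = FindNearestAINode(carPos);

    /* Pitch/roll of the track plane from the node's vectors (assumes Y-up):
     * pitch from how much the forward vector points up or down,
     * roll from how much the surface normal leans sideways. */
    float targetPitch = asinf(-node.forward.y);
    float targetRoll  = asinf(node.up.x);   /* crude; a proper basis would be cleaner */

    /* Ease pitch and roll toward the track plane with the same lag yaw has;
     * yaw itself keeps orbiting to the back of the car as it does now. */
    cam->pitch = ApproachAngle(cam->pitch, targetPitch, CAM_ORIENT_LAG);
    cam->roll  = ApproachAngle(cam->roll,  targetRoll,  CAM_ORIENT_LAG);
}
```

Some blending between neighbouring nodes (or a short look-ahead window) would probably also be needed so the camera doesn't snap when the car crosses from one node to the next.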