Double and triple buffering were highly lauded features in early 3D display card drivers for this very reason. In fact, just before 3dfx was purchased, their big thing was using screen buffering to create all kinds of cool gradient-based filtering effects. After they were bought, Nvidia used this tech for a very short period before they turned all their chips into geometry engines.
AFAIK all modern graphics cards still support double and triple buffering. At a minimum, double buffering is always used: drawing is always done in a back buffer, and vsync synchronises the buffer switch to the vertical sync signal to avoid "tearing". With vsync off, the buffers are switched as soon as the back buffer is complete. Triple buffering can be enabled via the drivers, and it can work in two different ways: either the latest complete buffer is shown at each switch, or the buffers are shown in sequence. With in-sequence triple buffering there is added latency, since the displayed frame is always 2 frames behind the currently rendering frame. DirectX uses in-sequence triple buffering if it's turned on. OpenGL shows the latest ready buffer, so it provides both a performance benefit (the GPU can keep rendering during a buffer switch) and low latency (you are only one frame behind the one currently being drawn).
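To make the latency difference concrete, here's a minimal sketch (names and the simulation model are my own, not any real driver API) of which rendered frame each display refresh shows under the two triple-buffering strategies, assuming the GPU can finish a fixed number of frames between vsyncs:

```python
from collections import deque

def displayed_frames(rendered_per_refresh, refreshes, mode):
    """Simulate triple buffering. mode is 'sequence' (FIFO, DirectX-style)
    or 'latest' (newest complete buffer wins, OpenGL-style)."""
    pending = deque()   # completed back buffers waiting to be displayed
    shown = []
    frame = 0
    for _ in range(refreshes):
        for _ in range(rendered_per_refresh):
            if mode == "sequence" and len(pending) == 2:
                break                    # both back buffers full: GPU stalls
            pending.append(frame)
            frame += 1
            if mode == "latest" and len(pending) > 1:
                pending.popleft()        # older completed frame is discarded
        if pending:
            shown.append(pending.popleft())
        elif shown:
            shown.append(shown[-1])      # nothing new ready: repeat last frame
    return shown

# GPU renders twice as fast as the display refreshes:
print(displayed_frames(2, 4, "sequence"))  # [0, 1, 2, 3] - lags behind
print(displayed_frames(2, 4, "latest"))    # [1, 3, 5, 7] - always newest
```

In the FIFO case the display is showing old frames while newer ones sit in the queue; in the "latest" case stale frames are simply dropped, which is where the lower input latency comes from.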
Vsync is something that IMO should be user-controlled at the driver level / overall system level. The system should allow the user to turn it off if they prefer lower latency, and on if they prefer no possibility of tearing. Not allowing the user to choose IS stupid.
I use a third-party program (I think it's USB Overdrive) to kill mouse acceleration in OS X. Couldn't stand the implementation in Snow Leopard, it was really annoying. Now I just boot into Win7 with Boot Camp. The MacBook Pro makes a nice Windows laptop.