We don't really test with files that big, so I don't know for sure, but... I've heard of a number of people fragmenting large entries into smaller entries and wrapping them in a stream-like class for easily getting them in and out of the cache. Does that make sense?
I could give more detail if I'm being too abstract.
What it would do is break your multi-gig file into smaller chunks and give them specialized key names derived from the original (e.g. myKey-1of100, myKey-2of100, etc.). The stream would be an abstraction that, as bytes are requested from it, pulls the section of the file that is needed from the cache and returns the proper portion without the user of the stream knowing it's happening (basically making the stored entry look like one big stream).
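Something like this rough sketch, in Python for illustration. The cache here is just a dict standing in for whatever cache client you're using, and the names (`ChunkedCacheStream`, `store_chunked`, the `myKey-NofM` key scheme) are all hypothetical, not any real API:

```python
import io
import math

class ChunkedCacheStream(io.RawIOBase):
    """Read-only stream over an entry split into fixed-size cache chunks.

    `cache` can be any mapping-like object; chunk keys are derived
    from the base key, e.g. "myKey-2of100" for chunk 2 of 100.
    """

    def __init__(self, cache, base_key, total_size, chunk_size):
        self.cache = cache
        self.base_key = base_key
        self.total_size = total_size
        self.chunk_size = chunk_size
        self.num_chunks = math.ceil(total_size / chunk_size)
        self.pos = 0  # current read position within the logical entry

    def _chunk_key(self, index):
        return f"{self.base_key}-{index + 1}of{self.num_chunks}"

    def readable(self):
        return True

    def read(self, size=-1):
        # Clamp the request to what's left of the logical entry.
        if size < 0 or self.pos + size > self.total_size:
            size = self.total_size - self.pos
        out = bytearray()
        while size > 0:
            # Which chunk holds the current position, and where in it.
            idx, offset = divmod(self.pos, self.chunk_size)
            chunk = self.cache[self._chunk_key(idx)]  # one cache fetch
            piece = chunk[offset:offset + size]
            out += piece
            self.pos += len(piece)
            size -= len(piece)
        return bytes(out)


def store_chunked(cache, base_key, data, chunk_size):
    """Split `data` into chunks and store each under a derived key."""
    num = math.ceil(len(data) / chunk_size)
    for i in range(num):
        cache[f"{base_key}-{i + 1}of{num}"] = data[i * chunk_size:(i + 1) * chunk_size]
    return len(data)  # caller needs the total size to read it back
```

So writing stores N small entries, and the reader only ever fetches the chunk (or chunks) covering the bytes actually requested, which is the whole point: no single multi-gig value ever goes into or comes out of the cache in one shot. You'd also want to keep the total size (and maybe the chunk size) in a small metadata entry under the base key so readers don't have to be told it out of band.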