it's a bit more complicated than that. there are two types of ssd caching algorithms.
the first one is the "OS level" kind that intel RST uses, which has visibility into file information. a file is really just a bunch of data blocks on disk; the filesystem keeps track of them through a metadata structure (the inode, on unix-like systems) that points to each block. ideally, these blocks are consecutive on disk, and OS level ssd caching algorithms try to keep all the blocks of a file in consecutive order. this is easy when the file is written to the storage hierarchy sequentially, but harder when the file is written out in interleaved or out-of-order chunks. at the OS level though, you can see which chunks constitute a single file and reorder them accordingly.
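to make that concrete, here's a toy sketch (not intel RST's actual algorithm, just an illustration): because an OS-level cache sees a `(file, offset)` pair for every write, it can place even out-of-order writes at their final, contiguous position on the ssd immediately.

```python
# toy model of OS-level placement: the cache knows which file each write
# belongs to and at what offset, so placement is trivially contiguous.

def os_level_place(writes, file_extents):
    """writes: list of (file_id, offset, data) in arrival order.
    file_extents: dict mapping file_id -> starting block reserved for it."""
    ssd = {}
    for file_id, offset, data in writes:
        # final block = file's reserved base + offset within the file
        ssd[file_extents[file_id] + offset] = data
    return ssd

# a 4-block file reserved at block 100, written completely out of order:
layout = os_level_place(
    [("f", 2, "c"), ("f", 0, "a"), ("f", 3, "d"), ("f", 1, "b")],
    {"f": 100},
)
print(sorted(layout))  # → [100, 101, 102, 103], contiguous despite write order
```

the point is that the arrival order of the writes never matters, because the file metadata tells the cache where every block ultimately belongs.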
the second one is the "disk level" algorithm that hardware SSD caches use. these can't see file information, because the concept of a "file" only exists at the OS level. what these algorithms do instead is write all blocks out to the cache log-style (every write goes to the end of a log). this optimizes for ssd wear, and it's only a little slower than contiguous placement when reading from an ssd, because ssd random access performance is very good. on a spinning disk, this would absolutely kill read performance. to reorder blocks, instead of trying to figure out which blocks belong to which file, the disk level algorithm reorders blocks on the ssd according to how many contiguous read accesses it observes after writing them.
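a minimal sketch of that idea, with a made-up promotion heuristic (real controllers use far more sophisticated policies): writes append to a log, the device counts which blocks get read back-to-back, and a compaction pass later reorders hot runs into sequential placement.

```python
# toy model of a disk-level cache: log-structured writes, plus a
# read-run counter that drives a simplified reordering pass.

class LogCache:
    def __init__(self):
        self.log = []          # ssd contents, in append order
        self.read_runs = {}    # block id -> contiguous-run read count

    def write(self, block_id, data):
        # every write goes to the end of the log (good for ssd wear)
        self.log.append((block_id, data))

    def read_run(self, block_ids):
        # the device can't see files, but it can see that these blocks
        # were requested back-to-back in one contiguous run
        for b in block_ids:
            self.read_runs[b] = self.read_runs.get(b, 0) + 1

    def compact(self):
        # reorder so frequently co-read blocks end up adjacent and in
        # logical order (a deliberately simplified heuristic)
        self.log.sort(key=lambda kv: (-self.read_runs.get(kv[0], 0), kv[0]))

cache = LogCache()
for b in (7, 3, 5, 4, 6):          # out-of-order writes land in log order
    cache.write(b, f"data{b}")
cache.read_run([3, 4, 5, 6, 7])    # one sequential read of the region
cache.compact()
print([b for b, _ in cache.log])   # → [3, 4, 5, 6, 7]
```

note that until `compact()` runs, reads have to chase blocks around the log, which is exactly why this scheme only works well on media with cheap random access.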
now, consider the pattern of writes generated by a torrent. bittorrent clients typically allocate enough space for the entire file via the OS, and then write pieces of the file in a completely random order. an OS level algorithm can see the file, knows the blocks belong to it even though they arrive out of order, and can just fill in the file's blocks on the ssd as they are written, getting contiguous placement immediately. a "disk level" algorithm has no idea that the torrent process is writing to a single file out of order, so it just appends the blocks to its log in whatever order they arrive. however, if you then read the file in order over and over again, the disk level algorithm will eventually place everything in order on the ssd to maximize performance.
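side by side, the torrent pattern looks something like this (piece numbering and names are made up for illustration): the OS-level cache's layout is sequential from the start, while the disk-level log simply mirrors the random arrival order.

```python
# toy illustration of the torrent write pattern: pieces of one file
# arrive in random order. the OS-level cache places them contiguously
# right away; the disk-level cache appends them in arrival order.

import random

pieces = list(range(8))            # piece index == final block offset
arrival = pieces[:]
random.shuffle(arrival)            # bittorrent delivers pieces out of order

# OS-level: placement keyed by file offset, so layout is already sorted
os_level = {offset: f"piece{offset}" for offset in arrival}

# disk-level: a log that just records whatever order pieces arrived in
disk_level = [(offset, f"piece{offset}") for offset in arrival]

assert sorted(os_level) == pieces            # sequential from day one
print([offset for offset, _ in disk_level])  # scrambled until reads fix it
```

the disk-level layout stays scrambled until enough sequential reads teach the cache what order the blocks belong in.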
so in short, the answer is yes, it does screw up certain kinds of ssd caching, but if the file is read in order enough times, all of these caching methods will eventually converge on similar read performance.