Uhh a little yes but lots of no, no, no.
Look, I know what you're thinking: it's basically what James Freeman is saying in the madVR doom9 thread. But your reasoning is way off and your wording is very clumsy, which leads to almost all of your claims being wrong.
I was writing a reply here and noticed madshi has already responded to James, so have a look at his response. I'll just cover the key points from a different angle to help people understand and stop the misinformation:
1. Jinc upscaling does not blur out detail, soften it, or destroy it. It just avoids creating aliasing in the new pixels it invents. The detail is still there. Remember, your source image is something like 50px and you're upscaling to 200px, so most of the output pixels have to be invented, and the scaler is striking a balance when it fills them in. For the same reason, please don't say that Jinc upscaling removes aliasing. It's not removing anything; it's avoiding adding it.
2. Agreed, it can look softer, and that's precisely because it doesn't create aliasing. Aliasing by its nature makes things look sharper, but what you have there is an artifact. A 'sharp' artifact.
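For anyone who wants to see why the two scalers behave differently, here's a stdlib-only Python sketch of the two kernel profiles (my own helper names; J1 is computed from its integral form rather than a library call). The key difference: Lanczos is a windowed sinc applied separately per axis and hits zero at every integer pixel offset, while jinc is applied radially, with zero crossings at Bessel-function zeros (~1.22, ~2.23, ...), so diagonal edges get the same treatment as horizontal ones.

```python
import math

def sinc(x):
    # normalised sinc: sin(pi*x)/(pi*x), with the limit value 1 at x = 0
    return 1.0 if x == 0 else math.sin(math.pi * x) / (math.pi * x)

def lanczos(x, a=3):
    # 1-D Lanczos kernel: sinc windowed by a stretched sinc, zero outside |x| < a
    if abs(x) >= a:
        return 0.0
    return sinc(x) * sinc(x / a)

def bessel_j1(x, n=2000):
    # J1 via its integral form (1/pi) * int_0^pi cos(t - x*sin(t)) dt, trapezoid rule
    h = math.pi / n
    total = 0.0
    for i in range(n + 1):
        t = i * h
        w = 0.5 if i in (0, n) else 1.0
        total += w * math.cos(t - x * math.sin(t))
    return total * h / math.pi

def jinc(x):
    # radial jinc profile: 2*J1(pi*x)/(pi*x), with the limit value 1 at x = 0
    if x == 0:
        return 1.0
    return 2.0 * bessel_j1(math.pi * x) / (math.pi * x)

for x in (0.0, 0.5, 1.0):
    print(f"x={x}:  lanczos={lanczos(x):+.4f}  jinc={jinc(x):+.4f}")
```

Notice lanczos(1) is zero but jinc(1) is not: the jinc kernel still contributes weight at integer offsets, which is exactly what suppresses the aliasing on angled edges at the cost of that slightly softer look.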
3. Your test scenario is geared for Lanczos to look closer to the original small image. Nearest neighbour would probably do well in this scenario too, but would you use that for real video content? Which leads to...
4. Real-world content does not consist of straight lines of one colour value on a background of another colour value. There are many smooth colour transitions and smooth gradients. Look around you at some lines: are any of them aliased? No.
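A tiny 1-D illustration of why the synthetic test flatters the "sharp" scalers (hypothetical helper, nearest neighbour for simplicity since it's the extreme case):

```python
def nearest_upscale(row, factor):
    # nearest-neighbour 1-D upscale: every source pixel is simply repeated
    return [v for v in row for _ in range(factor)]

hard_edge = [0, 0, 0, 255, 255, 255]      # the synthetic test case: two flat colours
smooth    = [0, 51, 102, 153, 204, 255]   # closer to real content: a gentle ramp

print(nearest_upscale(hard_edge, 4))  # the edge survives perfectly "sharp"
print(nearest_upscale(smooth, 4))     # the ramp turns into flat 4px steps (banding)
```

On the two-colour edge, repeating pixels looks like a flawless result, which is why such tests favour it. On the smooth ramp, the same operation produces obvious stair-stepping, because no in-between values are ever invented, and real content is full of ramps.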
5. For real-world content, Jinc3 AR has the edge over Lanczos (a slight edge only). If you primarily watch lines of one colour value, then Lanczos might have the edge. The good thing is, you can choose.
6. Don't bring in source-image arguments at all. You say we should preserve information from the original Blu-ray. So if the image you provided was downscaled, how was the original created? With which algorithm, and what artifacts did that introduce or details did it destroy? Was there aliasing in the original Blu-ray image or not? Because if not, then you shouldn't have aliasing in the upscaled image either, and your point is invalidated anyway.
And then some kickers:
I know lots of people here like to compare the upscalers back and forth, but let's put it into perspective:
7. Most content I watch is already 1080p (i.e. NO luma upscaling at all).
8. The other content is generally 720p upscaled to 1080p, a factor of only 1.5x. That's nothing like tests where a 50px image is scaled to 200px (4x). This drastically decreases the importance/effect of the algorithm choice.
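Just to pin the arithmetic down (hypothetical helper name; note the scale factor is dst/src, not the inverse src/dst ratio, which is where a "0.67x" figure would come from):

```python
from fractions import Fraction

def upscale_factor(src_height, dst_height):
    # linear scale factor per axis: destination size over source size
    return Fraction(dst_height, src_height)

print(upscale_factor(720, 1080))  # 3/2 -> a modest 1.5x
print(upscale_factor(50, 200))    # 4   -> the synthetic test's 4x
```

At 1.5x the scaler invents far fewer in-between pixels per source pixel than at 4x, so the differences between algorithms shrink accordingly.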
Oh, and 9. Most people don't have a big enough TV, or sit close enough to it, to be able to physically see the difference anyway.
So! You're better off putting your efforts into:
- eliminating judder / timing issues
- getting a calibration device and learning how to calibrate & profile with Argyll
because these benefits are HUGE!