New AA tech /g/amers

research.nvidia.com/publication/2018-08_Adaptive-Temporal-Antialiasing
We designed our method for compatibility with conventional game engines and to harness the strengths of TAA, while addressing its failures unequivocally and simply. The core idea is to run the base case of TAA on most pixels and then, rather than attempting to combat its failures with heuristics, output a conservative segmentation mask of where it will fail, and why. We then replace the complex heuristics of TAA at failure pixels with robust alternatives, adapting to the image content.

We implemented the ATAA algorithm in Unreal Engine 4 and gathered results using Windows 10 v1803, Microsoft DXR, the RTX-enabled NVIDIA 398.11 driver, and an NVIDIA Titan V GPU. To demonstrate the image quality achievable with ATAA, Figure 3 shows a comparison of ATAA and other common antialiasing algorithms used in games, zoomed to challenging areas of the scene.
The No AA row demonstrates the baseline aliasing that is expected from a single raster sample per pixel. The FXAA and TAA rows are the standard implementations available in UE4. SSAA 4x is 4× supersampling. We show the segmentation mask and three variations of ATAA with 2, 4, and 8 rays per pixel. Since the drawbacks of standard TAA are difficult to capture in still images, and all images in Figure 3 come from a stable converged frame, the supplemental video provides a more faithful comparison between standard TAA and ATAA in practice, including motion artifacts.

On an NVIDIA Titan V GPU at 1920×1080 resolution, ATAA runs in 18.4ms at 8× supersampling, 9.3ms at 4× supersampling, and 4.6ms at 2× supersampling for the image in Figure 1. This includes the creation of the standard TAA result, our segmentation mask, and adaptive ray tracing (including 1 shadow ray per light per primary ray). For the view shown in Figure 3, 107,881 pixels are selected for adaptive ray tracing, representing 5.2% of the total image resolution.
The specific number of rays identified for antialiasing varies per frame according to the segmentation mask. In addition, the FXAA pass adds as much as 0.75ms when the whole frame is new, but in practice its cost scales linearly down to zero as fewer pixels are identified for FXAA in the mask. Under typical camera motion, fewer than 5% of pixels are selected for FXAA. Our integrated ATAA solution operates within the 33 millisecond frame budget of a typical UE4 frame across all settings. Operating within a total frame budget of 16ms, while also ray tracing 1spp shadows at screen resolution, is possible with the 2× and 4× ATAA variants. As DXR is an experimental feature of Windows 10 v1803, we are optimistic that performance will improve as the runtime and driver receive important optimizations ahead of release.

Attached: ClipboardImage.png (940x264, 344.59K)
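
In case the core idea quoted above isn't clear, here's a minimal Python sketch of the segmentation-and-dispatch concept as I understand it; the flag names, categories, and thresholds are my own placeholders, not the paper's actual criteria or code.

# Hypothetical sketch of ATAA's segmentation idea; flag names and categories
# below are illustrative placeholders, not taken from the paper.
TAA, FXAA, RAY = 0, 1, 2   # segmentation categories

def classify(pixel):
    """Conservatively decide how one pixel should be antialiased."""
    if pixel["disoccluded"] or pixel["history_rejected"]:
        return RAY          # TAA history unusable: adaptively ray trace (2/4/8 spp)
    if pixel["new_on_screen"]:
        return FXAA         # cheap post-process AA until history accumulates
    return TAA              # default path: keep the converged TAA result

def antialias(frame):
    mask = [classify(p) for p in frame]                       # the segmentation mask
    ray_pixels = [p for p, c in zip(frame, mask) if c == RAY]
    # ...supersample ray_pixels with ray tracing, run FXAA on the FXAA set...
    return mask, ray_pixels

# Toy usage: two pixels, one stable and one disoccluded
frame = [
    {"disoccluded": False, "history_rejected": False, "new_on_screen": False},
    {"disoccluded": True,  "history_rejected": False, "new_on_screen": False},
]
print(antialias(frame)[0])   # -> [0, 2]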

This thread is going to hit bump limit, isn't it?

Attached: 297.png (426x477, 261.62K)

Enjoy your reduced detail and flickering pixels.

Attached: Untitled.png (755x352, 177.08K)

Thx, I do.

Attached: 4.png (384x272, 5.24K)

Is there literally any pixel art game that has programmatic antialiasing?

Maybe some modern indie games have that. But in every old game I played, any AA was simply drawn in by the artists, directly into the sprite frames or the background. No need to make it complicated and waste CPU cycles. And frankly, I think the old sprite games look and play much better than anything after the mid 90's (when 3D shit started being shoved into everything).

Attached: amiga-game-screenshots1.jpg (1972x984, 677.35K)

...

Most of the 8-bit and Amiga games I played were just hand-drawn pixels. The rest were flat-shaded polygons, or even wireframe.
I guess some pre-rendered stuff was common in DOS for games like Doom though, but I thought you were talking about the game engine doing the antialiasing. Anyway I'm not a fan of FPS or games like Myst. I even stopped buying games altogether after Commodore went out of business.

Attached: Loom_1.png (320x256, 8.44K)

do you even know what you're talking about?

Attached: Donkey_Kong_Country_Shot_2.png (294x224, 26.46K)

Always improving the image, never bothering to work on the picture.

Doom sprites were real models captured on camera, so in a way they're pre-rendered.
And I never liked the SNES. Never owned one, don't even collect its roms. The last console I bought was a Sega Genesis. Get it through your head already: I DON'T LIKE YOUR FUCKING FANCY ASS GRAPHICS SHIT.

That's digitization or rotoscoping, depending on how they animated them.
I don't care about your preferences, I want you to stop being an embarrassment for Zig Forums and start using the correct terms

On a related note, I remember years ago reading about silhouette tessellation, where edge detection is used to prioritize the silhouettes of geometry for additional polygons (possibly actual 3D polygons, maybe some strange sort of 2D processing, but it looked natural with shading/textures/etc). The justification was that vertex boundaries and other polygonal artifacts are hard to spot in the interiors of objects (especially with more modern shading techniques), whereas they are most visible along the outer silhouettes.

Has anything come of this? In a quick search just now, I see a lot of papers about something implemented as a shader in modern engines, but I can't see any wireframe images that indicate a reduction of polygons in the middle of objects and an increase in granularity toward the edges.
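
For what it's worth, the formulation I've seen is view-dependent: a tessellation stage raises the subdivision factor where the surface normal is nearly perpendicular to the view direction, i.e. at the silhouette. A rough Python sketch of that heuristic; the 1..16 factor range is made up for illustration:

import math

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def normalize(v):
    length = math.sqrt(dot(v, v))
    return tuple(x / length for x in v)

def tess_factor(normal, view_dir, min_tess=1, max_tess=16):
    # More subdivision near the silhouette, less in object interiors
    facing = abs(dot(normalize(normal), normalize(view_dir)))
    silhouette_weight = 1.0 - facing     # ~1 at the silhouette, ~0 when camera-facing
    return round(min_tess + silhouette_weight * (max_tess - min_tess))

# Camera-facing interior triangle vs. a grazing silhouette triangle
print(tess_factor((0, 0, 1), (0, 0, 1)))   # -> 1  (barely subdivided)
print(tess_factor((1, 0, 0), (0, 0, 1)))   # -> 16 (heavily subdivided)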


Wouldn't that sort'a make it not pixel art by definition? The closest to AA you could come would be alpha channel support, allowing you to use baked-in AA (in addition to translucency) on each sprite. Earliest example that jumps to mind (though it's prerendered 3D) is Starcraft.

Attached: polygon.jpeg (1024x1024, 84.52K)
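
To illustrate the alpha route (toy numbers, not from any particular game): sprite edge pixels carry fractional alpha, and standard "over" compositing blends them with whatever background they land on, so the baked-in smoothing holds up anywhere:

def over(sprite_rgb, alpha, background_rgb):
    # Standard "over" compositing of one sprite pixel onto the background
    return tuple(alpha * s + (1 - alpha) * b
                 for s, b in zip(sprite_rgb, background_rgb))

# A half-covered edge pixel of a white sprite over a blue background
print(over((255, 255, 255), 0.5, (0, 0, 255)))   # -> (127.5, 127.5, 255.0)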

It was digitization, although not all Doom sprites were based off models. Several were hand-drawn.

Also everyone likes sucking Adrian Carmack's dick but the best models were made by Gregor Punchatz who only had a footnote in Doom history.

Attached: bONgrZm.png (728x588, 303.9K)

I have never cared for anti-aliasing and always seen it as a useless resource hog.

Turrican is a crappy game and you fucking know it.

Idiot with poor short-term memory, you wrote earlier
Then you whip out some SNES title like it means something to me. All this time you're >implying I care about your retarded rendered graphics shit. Well I don't give a rat's ass, and never did. I stopped playing games when they moved away from hand-drawn sprites. The new games are nasty! I'd rather play Terry's collection of TempleOS games than any of that shit.

Also digitization will by its nature create anti-aliasing. Look at this fucking picture of a digitized girl. Zoom in closely so you can see the pixels. Now look at the contour between her skin and the blue wall. It's not a hard edge with a single color, is it? No, some blue pixels are blended into that zone. Looks like you don't even know wtf you're talking about and should feel embarrassed!
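
To make that concrete with a toy 1-D example (made-up numbers, nothing to do with this particular scan): each sensor pixel averages the light falling on it, so any pixel straddling the skin/wall edge comes out as a blend of the two colors, and that blend is the anti-aliased fringe:

def digitize(scene, out_width):
    # Average a finely sampled 1-D "scene" down to out_width sensor pixels
    step = len(scene) // out_width
    return [sum(scene[i * step:(i + 1) * step]) / step for i in range(out_width)]

# A hard edge between skin (1.0) and blue wall (0.0), sampled finely...
scene = [1.0] * 37 + [0.0] * 27
# ...comes out with intermediate values where a pixel straddles the edge
print(digitize(scene, 8))   # -> [1.0, 1.0, 1.0, 1.0, 0.625, 0.0, 0.0, 0.0]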


No, I don't fucking know it. I never played Turrican. Out of all those, I only played the top-left game (Lionheart).

Attached: Maria's_Christmas_Box_1.png (320x256, 27.7K)

Don't a lot of Euros actually like Turrican though? Compared to Shadow of the Beast where they like its visuals but hate the game.

False, very severe aliasing occurs in pixel-based cameras and scanners due to their discrete sensors, causing jaggies, moiré, and other artifacts, especially at higher magnification/lower resolution. While properly tuned software AA is the best way to deal with it, some newer devices, including most consumer-grade digital cameras, sidestep it with crude optical blur (low-pass) filters in front of the sensor that add simple anti-aliasing:
web.archive.org/web/20070216052644/https://www.maxmax.com/hot_rod_visible.htm
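
A small numeric illustration of the sampling problem (my own example, not from that page): sampling a stripe pattern below its Nyquist rate makes the sensor report a false low-frequency pattern, i.e. moiré, while blurring before sampling, which is what the optical low-pass filter does, throws away the detail the sensor can't represent before it can alias:

import math

def stripes(x, freq):
    return 0.5 + 0.5 * math.cos(2 * math.pi * freq * x)

# 9 stripe cycles sampled by only 10 sensor pixels: well below Nyquist
samples = [stripes(i / 10.0, 9) for i in range(10)]
print([round(s, 2) for s in samples])   # reads as ~1 slow cycle: aliasing/moire

# An ideal pre-blur averages the stripes to flat grey before sampling
print([0.5] * 10)                       # no false pattern left to alias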

As for your photo, I'm sure you're aware it's 8-bit, almost certainly scanned at a much higher resolution than appears onscreen, and the dithering is so severe that it leaves a 2-6 pixel fringe of dappled fuzz around every detail.

Nigger wat even. The picture is 320x256 which is exactly the common Amiga resolution. And yeah it's 8-bit. How many fucking colors do you think an OCS Amiga can display at one time at that resolution (protip: it's a power of two and less than 64). This is exactly the picture as rendered by UAE. Now suck it, faggot.

...

Who says he's using a scanner that's making a high resolution file? That game is from 1988, and Newtek had this hardware already available for a couple years by then, where you can just capture at the exact resolution you want:
youtube.com/watch?v=rFqZvzcarrs

I always scanned at least 2x the size I needed back then, for the exact reason I just said (in addition to scanning at a higher bit depth to produce an optimum palette). Besides which, as I said, that image is so badly dithered that whatever aliasing may have existed is unintelligible.

You don't need to write me a wall of text to show how little you know about digital images.

are you insane, it's heavier than HBAO+, why the fuck would anyone use it when you can just work a little more on your TAA solution and mix it with a small amount of MSAA at like 10x lower cost

Wow what the fuck. I'll take my 16x MSAA thank you.

It's a good point per se, but in practice it only applies to games where you don't need to pick apart individual pixels on a 4k screen in order to identify your targets. I.e. when aliasing barely impacts object legibility, rough polygon edges are fine. When you do shit like sniping people all the way across the map using ironsights, when your target is smaller than the ironsight front post, then anti-aliasing starts to matter.

Is there even a point to AA with our high resolution screens? I always turn it off and the difference is negligible.

>>>/g/

/g/et the fuck out

sage