I loved the article that this links to about the 'Jump Flood Algorithm':
https://bgolus.medium.com/the-quest-for-very-wide-outlines-b...
So fascinating! Thanks for indirectly leading me to this! I love thinking about all the various approaches available at the pixel/texel/etc level!
It's also another case where a very clever way of generating a type of SDF (Signed Distance Field) is doing a lot of the heavy lifting. Such a killer result here as well! Any-width-outline-you-like in linear time?!! Amazing when compared to the cost of the brute-force ones at huge widths!
I wholeheartedly endorse SDFs, whether they're function-based 'vector' ones, like Inigo Quilez's amazing work, or texel-or-voxel-based 'raster' ones like in the article. I think Houdini supports raster SDFs very well; it has a solid, mature set of SDF tools worth checking out (there's a free version if you don't have a license)!
And of course there are all the many other places SDFs are used!! So useful! Definitely worth raising awareness of, I reckon!
Note that the 'Jump Flood Algorithm' is O(N log N) where N is the number of pixels. There is a better O(N) algorithm which can be parallelized over the number of rows/columns of an image:
https://news.ycombinator.com/item?id=36809404
Unfortunately, it requires random access writes (compute shaders) if you want to run it on the GPU. But if CPU is fine, here are a few implementations:
JavaScript: https://parmanoir.com/distance/
C++: https://github.com/opencv/opencv/blob/4.x/modules/imgproc/sr...
Python: https://github.com/pymatting/pymatting/blob/afd2dec073cb08b8...
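For a feel of how the separable approach works, here's a minimal NumPy sketch in the style of Felzenszwalb-Huttenlocher (my own illustration, not taken from any of the implementations above). Each 1D pass computes an exact squared-distance transform over a row or column, and the passes are independent of each other, which is where the parallelism over rows/columns comes from:

    import numpy as np

    INF = 1e20

    def dt_1d(f):
        # Exact 1D squared-distance transform: f[i] is the input cost at
        # cell i (0 for seed cells, INF elsewhere). O(n) via the lower
        # envelope of parabolas.
        n = len(f)
        d = np.empty(n)
        v = np.zeros(n, dtype=int)   # parabola apexes in the lower envelope
        z = np.empty(n + 1)          # boundaries between parabolas
        k = 0
        z[0], z[1] = -INF, INF
        for q in range(1, n):
            s = ((f[q] + q * q) - (f[v[k]] + v[k] * v[k])) / (2 * q - 2 * v[k])
            while s <= z[k]:
                k -= 1
                s = ((f[q] + q * q) - (f[v[k]] + v[k] * v[k])) / (2 * q - 2 * v[k])
            k += 1
            v[k] = q
            z[k], z[k + 1] = s, INF
        k = 0
        for q in range(n):
            while z[k + 1] < q:
                k += 1
            d[q] = (q - v[k]) ** 2 + f[v[k]]
        return d

    def distance_field(mask):
        # Unsigned Euclidean distance to the nearest True pixel, O(N) total.
        f = np.where(mask, 0.0, INF)
        for i in range(f.shape[0]):   # pass 1: each row independently
            f[i, :] = dt_1d(f[i, :])
        for j in range(f.shape[1]):   # pass 2: each column independently
            f[:, j] = dt_1d(f[:, j])
        return np.sqrt(f)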
Dang, I implemented SDFs in 2023 around that time with jump flooding. Wish I had seen this version. Thanks for pointing it out!
In my own projects I use JFA/SDF-based outlines the most because of their quality as well as the possibility to render distance-based effects like pulsating outlines.
This (https://x.com/alexanderameye/status/1663523972485357569) 3D line painting tool also uses SDFs that I then write to a tiny texture and sample at runtime.
SDFs are very powerful!
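To make the distance-based effects concrete, here's a tiny sketch of a pulsating outline (my own illustration, assuming an unsigned distance field around the object's silhouette, not the linked tool's code): the outline is just the band of pixels within some distance of the surface, and animating that width over time gives the pulse for free. A shader would do the same test per fragment.

    import numpy as np

    def pulsating_outline(dist, t, base=4.0, amp=2.0, speed=3.0):
        # dist: unsigned distance field (0 on the object, growing outward).
        # Returns a boolean mask of the outline band just outside the
        # silhouette, whose width oscillates over time t.
        width = base + amp * np.sin(speed * t)
        return (dist > 0.0) & (dist < width)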
SDFs work great as the structure for circle marching, which is one approach to accelerating global illumination to the point of making it real-time, with limitations. The same approach can be extended to use radiance cascades, making it even faster! Pretty fun.
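For anyone unfamiliar, the marching loop itself is tiny. A minimal sketch of sphere tracing in 3D (the same idea as circle marching in 2D): step along the ray by the SDF value, which is by definition a step that cannot overshoot the nearest surface.

    import numpy as np

    def sphere_trace(sdf, origin, direction, max_steps=64, eps=1e-3, max_dist=100.0):
        t = 0.0
        for _ in range(max_steps):
            d = sdf(origin + t * direction)   # distance to nearest surface
            if d < eps:
                return t                      # hit: distance along the ray
            t += d                            # safe step: can't overshoot
            if t > max_dist:
                break
        return None                           # miss

    # Example: unit sphere at the origin, camera 3 units back on -z.
    sdf = lambda p: np.linalg.norm(p) - 1.0
    print(sphere_trace(sdf, np.array([0.0, 0.0, -3.0]), np.array([0.0, 0.0, 1.0])))  # ~2.0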
One day, I'd love to dive into stylised 3D graphics as an R&D project. There's been decent progress recently, but I think there's a lot of low-hanging fruit left to pick.
Some open questions:
- How do you reduce the detail of a toon-rendered 3D model as the camera zooms out? How do you seamlessly transition between its more-stylised and less-stylised appearance?
- Hand-drawn 2D animations often have watercolour backgrounds. Can we convincingly render 3D scenery as a watercolour painting? How can we smoothly animate things like brush-strokes and paper texture in screen space?
- How should a stylised 3D game portray smoke, flames, trees, grass, mud, rainfall, fur, water...?
- Hand-drawn 2D animations (and some recent 3D animations) can be physically incorrect: the artist may subtly reshape the "model" to make it look better from the current camera angle. In a game with a freely-moving camera, could we automate that?
- When dealing with a stylised 3D renderer, what would the ideal "mesh editor" and "scenery editor" programs look like? Do those assets need to have a physically-correct 3D surface and 3D armature, or could they be defined in a more vague, abstract way?
- Would it be possible to render retro pixel art from a simple 3D model? If so, could we use this to make a procedurally-generated 2D game?
- Could we use stylisation to make a 3D game world feel more physically correct? For example, when two meshes accidentally intersect, could we make that intersection less obvious to the viewer?
There are probably enough questions there to fill ten careers, but I suppose that's a good thing!
> Hand-drawn 2D animations often have watercolour backgrounds. Can we convincingly render 3D scenery as a watercolour painting? How can we smoothly animate things like brush-strokes and paper texture in screen space?
There are various techniques to do this. The most prominent one IMO is from the folks at Blender [0] using geometry nodes. A Kuwahara filter is also "good enough" for most people.
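For reference, the naive Kuwahara filter is only a few lines (a slow, illustrative sketch of the classic formulation, not any particular engine's version): each output pixel takes the mean of whichever of its four corner windows has the lowest variance, which flattens regions while keeping edges crisp, hence the painterly look.

    import numpy as np

    def kuwahara(gray, r=3):
        # gray: 2D float image. For each pixel, examine the four
        # (r+1)x(r+1) windows that have the pixel at a corner, and output
        # the mean of the window with the smallest variance.
        h, w = gray.shape
        pad = np.pad(gray, r, mode='edge')
        out = np.empty_like(gray)
        offsets = [(0, 0), (0, r), (r, 0), (r, r)]  # the four quadrants
        for y in range(h):
            for x in range(w):
                best_var, best_mean = np.inf, 0.0
                for oy, ox in offsets:
                    win = pad[y + oy : y + oy + r + 1, x + ox : x + ox + r + 1]
                    v = win.var()
                    if v < best_var:
                        best_var, best_mean = v, win.mean()
                out[y, x] = best_mean
        return out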
> When dealing with a stylised 3D renderer, what would the ideal "mesh editor" and "scenery editor" programs look like? Do those assets need to have a physically-correct 3D surface and 3D armature, or could they be defined in a more vague, abstract way?
Haven't used anything else but Blender + Rigify + shape keys + some driver magic is more than sufficient for my needs. Texturing in Blender is annoying but tolerable as a hobbyist. For more NPR control, maybe DillonGoo Studio's fork would be better [1]
> Would it be possible to render retro pixel art from a simple 3D model? If so, could we use this to make a procedurally-generated 2D game?
I've done it before by rendering my animations/models at a low resolution and calling it a day. The results are decent but take some trial and error. IIRC, some folks have put in more legwork with fancy post-processing to eliminate things like pixel flickering, but I can't find any links right now.
The best results I’ve seen for procedurally generated “old school” style pixel art sprites have come from heavily LoRA-ed diffusion models. You can find some on Civitai. [1]
So the future here may be a 3D mesh-based game engine on a system fast enough to do realtime Stable Diffusion-style conversion of the frame buffer into strictly adhering (for pose and consistency) “AI” pixel art.
[1] https://civitai.com/search/models?sortBy=models_v9&query=Pix...
Ah, that's very interesting. I'll save that link for future reference.
Not 3D, but Stamen (a data visualization and cartography studio) made a beautiful watercolor map renderer:
> Would it be possible to render retro pixel art from a simple 3D model?
Not exactly retro pixel art - or maybe it is, since it's been 25 years (omfg) - but in Commandos 2+ we had 3D models for the characters, vehicles, etc., which we rendered at runtime to a 2D sprite that we then mixed with the rest of the pre-rendered 2D sprites and backgrounds.
A more modern example would be Dead Cells
One thing I've had to do while working on my 3D pixel art game is change the size of the pixels as the camera zooms out.
Low res: https://x.com/Navy_Green/status/1525564342975995904 Stabilization: https://x.com/Navy_Green/status/1693820282245431540
Pixel art from 3d models: see Dead Cells. https://www.gamedeveloper.com/production/art-design-deep-div...
The future of 3D graphics is going to be feeding generative NNs with very simple template scenes, and using the NN to apply almost all the lighting and styling.
It's kind of ridiculous that this occurs just as the dream of raytracing hardware approaches viability.
> It's kind of ridiculous that this occurs
It hasn't 'occurred' at all. People extrapolated what they saw in the 50s to cars the size of houses, personal flying cars and robot helpers too.
There is certainly some erroneous ‘extrapolation’ going on here.
Right, that's why I pointed it out. There isn't any evidence this is something people even want to happen, let alone that it will.
No, it is you that doesn't want it to happen, and so you're having a strangely extreme reaction to a simple observation.
Apparently you are not alone in that.
It isn't a "strangely extreme reaction" to say that something that hasn't happened hasn't happened.
Assuming English is not your first language, your reaction is extreme for two reasons: firstly, I was clearly referring to the future, and secondly, it is happening anyway.
Sadly for you AI griefer bots are a thing, so that side of your reason to exist is also under threat, but you can deny the existence of those too if it will make you feel better.
> I was clearly referring to the future
You said "It's occurring" which is the present.
> Sadly for you AI griefer bots are a thing, so that side of your reason to exist is also under threat, but you can deny the existence of those too if it will make you feel better.
What is this supposed to mean? You think pointing out that what you said is happening right now isn't actually happening is 'griefing' you? You aren't being persecuted by someone replying to you. You can always avoid saying things that aren't true, or give evidence that they are.
If you show me some sort of realtime hallucination that takes rough renders and outputs temporally coherent images in 16ms or less I'll say that you are right that this is happening.
> You said "It's occurring" which is the present.
Hallucination.
You said "It's kind of ridiculous that this occurs". This is the present tense.
I don't think you're hallucinating though, I think you just got mixed up with thinking a wild extrapolation was automatically coming true right now.
> You said "It's kind of ridiculous that this occurs". This is the present tense.
LOL! Convincing griefing requires a slightly larger context window!
This seems like you're trying to distract from backing up what you said with evidence. I think if you had some evidence you would have linked it already.
Ha! No troll ever resorted to asking for citations.
The question is what for? The original claims:
> The future of 3D graphics is going to be feeding generative NNs with very simple template scenes, and using the NN to apply almost all the lighting and styling.
> It's kind of ridiculous that this occurs just as the dream of raytracing hardware approaches viability.
Or what you extrapolated that to in your imagination:
> If you show me some sort of realtime hallucination that takes rough renders and outputs temporally coherent images in 16ms or less I'll say that you are right that this is happening.
These are not the same. If you think so you have serious comprehension problems.
You're a simple troll arguing against things you've imagined. Get back under your bridge.
I just asked for evidence, and instead of showing anything you went straight to name-calling.
I just asked what you wanted evidence for, and you resorted to claiming persecution.
You go by the name CyberDildonics. You claim to think “is occurring” and “occurs” mean the same thing, so your ability to understand is clearly limited. The world does not owe you an explanation just because you want one, and insulting those who point this out is classic trolling, so the label is deserved.
I started my career working on VR apps and soon pivoted to webdev due to the better market.
Articles like this one make me miss the field - working with 3d graphics, collisions, shaders, etc. had a magical feeling that is hard to find in other areas. You're practically building worlds and recreating physics (plus, math comes up far more practically and frequently than in any other programming field).
I know exactly what you mean
I went the other way, from webdev to working in games, and in my experience it really is as fun/interesting as it sounds. The satisfaction of the work is so much higher, and the ceiling/depth of the topic is very high.
I've been doing it for 4 years so far and I've never hit a wall of boredom like I did in webdev.
Nothing beats coming in to work on Monday, opening up the engine editor, seeing the mini world you're working on being rendered, and thinking about what cool feature you'll add next.
How were you able to make the transition?
Yeah, the ceiling part is key for me.
There are interesting challenges in web dev, but it's mostly related to scale, architecture and organizational complexity - realistically, no one is going to have their mind blown reading your loginController.
Game programming does have a lot more space for wizardry, you can code a shader or a mesh splitting algorithm that feels like black magic to others, and it's just you with a code editor.
There are still many reasons for me not to regret my move, mostly related to the realities of the market - lower salaries, crunch, the seasonal/project based employment, limited choice of OS/dev tools, etc.
But credit where credit is due, that field is super fun.
Personally I think opening up Steam and seeing your game (shipping) can beat opening up the editor, but I agree that it's a very fulfilling industry.
Well, in web dev, people vacuuming up all of your code to feed a NN creates something that eliminates the tedious annoying parts and leaves the more interesting work. Conversely, even in the technical end of art, people vacuuming up all of your output to feed a NN eliminates the interesting, fulfilling parts and for professional tasks, leaves the existing tedious annoying parts, and even adds a few more, while devaluing the entire skillset. And then everyone in tech pretends they know more about your job than you do, and calls you a jerk for not being happy that SV companies killed your job market by offering C- results for F prices without the inconvenience of any of the people that created the initial “data” getting paychecks. So, considering the pros and cons, I think you made the right choice.
For my game Astral Divide, I ended up making a technique that is not listed.
It's similar to the one described as "Blurred Buffer", except that instead of doing a blur pass, I'm exploiting the borders created by the antialiasing (I think via multisampling, or maybe texture filtering).
I draw the object in plain opaque white on a transparent black background, and in the fragment shader I filter what does not have a fully opaque or transparent alpha channel (according to some hardcoded threshold). It gives decent enough results, it's cheap performance-wise and is very simple to implement.
Tricks like that are just so satisfying. I remember in a video editing software I was working on, I was tasked to implement the Crop tool, and there was a requirement to temporarily blur everything outside of the cropped area during editing. I didn't want yet another expensive blur pass, so all I did was just change the mipmap bias so that a lower-res texture rendered, and texture filtering did all the work for free. I then compared my result to a similar "blur" effect in PowerPoint (when cropping, too), and it looked the same (even better - PowerPoint had a slight color banding, and my impl didn't).
Another similar trick - we didn't have full res antialiasing for some reason (performance?), and most of the canvas was just a bunch of 2D rectangles (representing video frames), however they could be rotated, and aliasing was visible along the edges. Instead of enabling full screen antialiasing I just extruded all quads a little bit, while proportionally shrinking UV coordinates - so that the visible "edge" was inside the actual 3D quad, and texture filtering, again, did all the work for free :)
Wouldn’t that cause aliasing if you set those pixels to a solid color? Or do you keep the alpha and set a color so the outline is faint but not aliased?
I'm reusing the alpha channel indeed. And I reverse it for the innermost side of the border.
So if the border goes like transparent->opaque, I divide it into segments using thresholds (transparent->min_threshold->max_threshold->opaque) and change the alpha values:
- Smooth out the transparent->min_threshold segment, so that it goes linearly from a=0 to a=1
- Make the min_threshold->max_threshold segment opaque (a=1)
- Reverse and smooth out the max_threshold->opaque segment, so that it goes linearly from a=1 to a=0

    const float MIN_ALPHA_THRESHOLD = 0.3;
    const float MAX_ALPHA_THRESHOLD = 0.7;
    [...]
    if (fragmentColor.a < MIN_ALPHA_THRESHOLD) {
        // Outer edge of the border
        float outputAlpha = fragmentColor.a / MIN_ALPHA_THRESHOLD;
        outputColor = vColor * outputAlpha;
    } else if (fragmentColor.a > MAX_ALPHA_THRESHOLD) {
        // Inner edge of the border
        float outputAlpha = 1 - ((fragmentColor.a - MAX_ALPHA_THRESHOLD) / (1 - MAX_ALPHA_THRESHOLD));
        outputColor = vColor * outputAlpha;
    } else {
        // "Inside" of the border
        outputColor = vColor;
    }
Interesting! I’ll try it out. Thanks for sharing.
These are great notes!
When looking into the edge detection approach recently, I came across this great method from the developer of Mars First Logistics:
https://www.reddit.com/r/Unity3D/comments/taq2ou/improving_e...
That is excellent, and the result can be very pleasing, as in this render from the article: https://ameye.dev/notes/rendering-outlines/edge-detection/co...
It looks like a frame from the Dutch comic book Franka!
It's a recreation of a panel from Tintin: The Black Island!
What a phenomenal article and reading UX.
Explaining a difficult concept in terms anyone can understand. Great diagrams and examples. And top marks on readability UX for spacing and typography.
OP, what inspired you to create your current theme? Have you ever considered creating an engineer-focused publishing platform?
Technical art is definitely my first love in software. I'm excited for Godot to add an easier compute shader pipeline for post-processing effects - their current compositor plugin setup is a bit boilerplate-intensive.
This repo is a great example of post-processing in Godot: https://github.com/sphynx-owner/JFA_driven_motion_blur_demo
I remember the first time I saw this effect was on Wacky Races on the Dreamcast.
I remember at the time there was a lot of PR around this being the first game to introduce that effect and how the developers basically invented it.
I can’t comment on whether that was actually true or just PR BS, but it was definitely the first time I experienced it as a gamer.
I suspect a lot of people 'invented' the effect at approximately the same time. Honestly, the Dreamcast was the first piece of hardware really capable of doing the effect to a high level of quality in real-time.
I developed the cel shading effect for the Dreamcast game 'Looney Tunes: Space Race' (developed by Infogrames Melbourne House) literally during the first week we had access to a Dreamcast development kit. Infogrames Sheffield (devs of Wacky Races) were shown an early version of our implementation, and added a similar effect to their game. It looked great, but went into their game pretty late in production, so the game hadn't really been optimised for it the way that ours was.
And the folks behind Jet Grind Radio came up with the effect on their own as well, and beat both of us to market. They were using exactly the same algorithm, but were using it in a very different way; they were fully embracing and leaning into the uneven, wide and jagged outlines, where Sheffield and we were fighting against them and trying to match a more uniform and traditional art style.
And then only about a year later, somebody seemed to have figured out how to make the edge-detection cel shading approach work in real-time on Xbox, for the game "Dragon's Lair 3D". I had done a test implementation of that approach on the Dreamcast, but it wasn't nearly performant enough for us to run it on multiple characters at once while playing a game too! Not sure whether it was due to the Xbox being more powerful or them just having a smarter algorithm than mine, but you can't argue with their results! If you're making a game that you want to look like an actual hand-drawn cartoon, that is still absolutely the best quality way to do it, IMHO.
Someday I'll find an excuse to try my hand at implementing one of those again. Performance shouldn't be a problem at all any more, I imagine!
For a game I haven't thought about in nearly 25 years, I want you to know I instantly remembered how Looney Tunes: Space Race looked.
Thanks for sharing that. It was a great read.
Apparently it released simultaneously with the (more famous) Jet Set Radio, also for Dreamcast, which had similar effects. Quite the coincidence.
Not quite as much of a coincidence as you might think. Many of these algorithms come from graphics researchers presenting at SIGGRAPH (https://www.siggraph.org, the leading conference).
So if Jet Set Radio was released in June 2000, you can look for related papers a couple of years before to see if new techniques were appearing. And, in fact, they were!
Disney paper (1998) on texture mapping for cel shading (the color of a cartoon):
https://media.disneyanimation.com/uploads/production/publica...
NYU paper (1998) applying outlines to 3D objects (the black outline of a cartoon):
https://mrl.cs.nyu.edu/publications/npr-course1999/hertzmann...
Interesting. There actually are various recent hobbyists who have achieved these effects even on an N64 due to the quite programmable hardware. But if the basic idea or concrete algorithms were only invented in 1998, it makes sense that contemporary games didn't use it until the Dreamcast was out.
The N64 was also the most powerful console of its generation, so really it's the next most powerful console before the Dreamcast.
It only looked like crap in its era because carts were expensive compared to CDs. Which is less of an issue now.
Also the hardware antialiasing and overuse of fog didn’t help its case. Thankfully the former can be fixed either via hardware mods or emulation.
I'd still be interested to see if those demos you saw were full games or not. I've seen a lot of cool effects in games get abandoned because they didn't scale well to a fully fleshed-out game.
I NEED to get into shader programming and 3D rendering.
Articles like this are awesome, I wish I could actually write a shader.
I learned using Shadron 8 or so years ago.
The edge detection part reminds me a lot of the game Rollerdrome
https://store.steampowered.com/app/1294420/Rollerdrome/
I wonder if they used something like that
They used edge detection with a custom input buffer, a bit like this
Ah neat
Great article by the way OP
I’d render the model to a buffer using a single colour (no shading, lighting or texturing), then render the buffer with edge detection. This gives an outline with one additional render pass.
Surprised this isn't obvious.
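For illustration, here's roughly what that second pass does, as a CPU sketch (assuming the single-colour buffer has been read back as a boolean mask; a fragment shader would make the same neighbour comparisons per pixel):

    import numpy as np

    def outline_from_mask(mask):
        # A pixel is part of the outline if it differs from any of its
        # 4-connected neighbours, i.e. it sits on the silhouette boundary.
        m = mask.astype(bool)
        edge = np.zeros_like(m)
        edge[:, 1:] |= m[:, 1:] != m[:, :-1]
        edge[:, :-1] |= m[:, :-1] != m[:, 1:]
        edge[1:, :] |= m[1:, :] != m[:-1, :]
        edge[:-1, :] |= m[:-1, :] != m[1:, :]
        return edge & m   # keep only the boundary pixels inside the object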
Funny, I was just playing around with some of these techniques yesterday for my game https://roguestargun.com now that I got it running reasonably comfortably at 72 fps on the Oculus Quest 2.
Mostly due to laziness, as a cel-shaded look requires less retexturing for my game than creating proper PBR materials.
The inverted hull method I initially used for the cel-shaded look, however, really does have quite a performance hit.
I quite like the techniques used to make cel-style graphics using the usual 3D pipeline, as seen in quite a few Nintendo games like
https://en.wikipedia.org/wiki/Pok%C3%A9mon_Sun_and_Moon
and also used to make other illustration-like styles such as
A simple technique not listed here for drawing contour edges:
1) Create an array storing all unique edges of the faces (each edge being composed of a vertex pair V0, V1), as well as the two normals of the faces joined by that edge (N0 and N1).
2) For each edge, after transformation into view space: draw the edge if sign(dot(V0, N0)) != sign(dot(V0, N1)).
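A minimal sketch of that in code (my own illustration; assumes a closed triangle mesh with vertices already transformed into view space, so the camera sits at the origin and a vertex position doubles as the view vector):

    import numpy as np

    def contour_edges(verts, faces):
        # verts: (V, 3) view-space positions; faces: (F, 3) vertex indices.
        # Returns vertex index pairs (i, j) lying on the silhouette contour.
        a, b, c = verts[faces[:, 0]], verts[faces[:, 1]], verts[faces[:, 2]]
        normals = np.cross(b - a, c - a)            # per-face normals
        edge_faces = {}                             # undirected edge -> faces
        for f, tri in enumerate(faces):
            for i in range(3):
                e = tuple(sorted((int(tri[i]), int(tri[(i + 1) % 3]))))
                edge_faces.setdefault(e, []).append(f)
        silhouette = []
        for (i, j), fs in edge_faces.items():
            if len(fs) != 2:
                continue                            # boundary edge
            v = verts[i]                            # view vector to the edge
            d0, d1 = np.dot(v, normals[fs[0]]), np.dot(v, normals[fs[1]])
            if np.sign(d0) != np.sign(d1):          # one front-, one back-facing
                silhouette.append((i, j))
        return silhouette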
Great breakdown of outline rendering techniques! The detailed explanations and code snippets are super helpful for understanding Unity's shader possibilities. Thanks for sharing!
Side note, whatever happened with the great Unity pricing debacle? Did developers end up moving en masse to Unreal and Godot? Or were Unity’s walkbacks and contrition sufficient to keep it a going concern?
They reversed it. I don't think a huge number of people changed over, but definitely a substantial amount! Godot has been growing quickly.
Eh, some people moved, but probably not many people that were knee deep in a project. I'm sticking with Unity for now myself - for all of its annoying eccentricities and bad developer relations it fits in pretty snugly between the power and complexity of Unreal and Godot.
Duplicate the polys, scale the vertices along their normals by the outline width, invert the normals, draw in black? No shadow pass?
edit: turns out there is more .. thanks
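Roughly, yes - that's the classic 'inverted hull' method. As a sketch of the mesh construction (my own illustration, assuming per-vertex normals; reversing the triangle winding is what inverts the normals):

    import numpy as np

    def inverted_hull(verts, normals, faces, width=0.02):
        # Push vertices outward along their normals and flip the winding,
        # so the expanded shell shows through only around the silhouette
        # when the original mesh is drawn over it, filled with black.
        hull_verts = verts + width * normals
        hull_faces = faces[:, ::-1]   # reversed winding == inverted normals
        return hull_verts, hull_faces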