Unity Version: 2021 LTS

Setting up for a URP Fullscreen Outline Effect

Edit: This was originally going to be part one of a two-parter, but I decided I don't like technical writing, so I've edited it to be a useful resource.

Continuing the emerging theme of 'do a tutorial and then write about it', except this time I did actually have to figure some things out myself, very exciting!

I knew very little about doing an outline shader. I knew the theory behind how to create a 'toon' shader style effect because I got to see that at Gravity Sketch, but a full screen effect was new to me and I was curious about the implementation. Turns out it's actually pretty straightforward, with maybe one or two caveats.

I was following this tutorial from Roystan.net, which was written for Unity 2018 and the BiRP, so I knew I would have a bit of extra work to do; after doing the Custom SRP tutorials, though, I was feeling more confident in my shader/SRP abilities. If you want to follow along with my slightly modified version of this tutorial you can find the complete project here, or you can download the skeleton project here. I specifically used Unity 2021.3.13, but you should be able to safely use any 2021 LTS version.

In this part one I'll be setting up all the scripts and the shader we'll use so that we can actually perform a full screen effect because, at the time of writing, URP does not have a simple way of adding a post processing layer like the one that exists in the Post Processing v2 package.

The skeleton project from the original tutorial is quite fleshed out, so mine is quite a bit more barebones, but that's personal preference on my part. All that's set up is that the Universal RP package is installed and the graphics and quality settings have been updated to use the renderer assets. By default URP generates different quality levels for the assets, but we don't need them, so I deleted those and just use the one asset.


Immediately the tutorial starts with a premade project that has a custom Post Processing layer with a Post Processing volume and a Post Processing profile, none of which we are using because URP uses the Volume component to apply post processing effects.

As mentioned, in URP there isn't currently a way to create custom Volume components, but we can still create a full screen effect by using a ScriptableRendererFeature to inject passes into the URP pipeline before/after any specific pass. This is a great substitute for what we're trying to achieve with a full screen effect, but you lose the flexibility of making post processing volumes a fixed size in world space, which you could use to create trippy scenarios.

To be fair, even when you can create custom Volume components you might still prefer this solution, as it gives you the flexibility of deciding when the effect should run in the pipeline rather than being stuck in the post processing pass.

A high level overview of what we want to achieve can be broken nicely into the following steps:

  1. Read in the different buffers we need to do edge detection
  2. Perform edge detection
  3. Render output to a full screen sized quad
  4. Profit?

If you look at the Settings/URP Forward Renderer asset in the inspector then you can see at the bottom there is a button that says "Add Renderer Feature". If you click this then you can see a list of Renderer Features you can add that are built in with the URP package. We want to create our own one of these!

Open up your favourite code editor and create a file named OutlineRendererFeature.cs inside the Scripts folder. Make a class inside this file with the same name as the file and make the class inherit from ScriptableRendererFeature. If your code editor is nice it should automagically import the required usings, and it will yell at you that you haven't overridden the abstract methods. If your editor isn't nice then this is the file you should have, and you shouldn't feel bad about copy and pasting it!

using UnityEngine;
using UnityEngine.Rendering.Universal;

public class OutlineRendererFeature : ScriptableRendererFeature
{
    public override void Create()
    {

    }

    public override void AddRenderPasses(ScriptableRenderer renderer, ref RenderingData renderingData)
    {
        
    }
}

Now we can go back to our Settings/URP Forward Renderer asset and we can see that inside the Add Renderer Feature list the new class we made appears! Click on it and you'll see nothing changes, but that's because we're not doing anything inside this feature yet. To affect anything we need to implement a Render Pass.

Create another script named OutlinePass.cs. This class should inherit from ScriptableRenderPass. Once you satisfy all the errors from unimplemented member functions it should look like this...

using UnityEngine.Rendering;
using UnityEngine.Rendering.Universal;

public class OutlinePass : ScriptableRenderPass
{
    public override void Execute(ScriptableRenderContext context, ref RenderingData renderingData)
    {
        
    }
}

In order to enqueue the pass it needs to be created by the ScriptableRendererFeature inside of the Create() method. Create a member variable of the class and assign it a new instance of the OutlinePass. Then to enqueue it, call renderer.EnqueuePass(m_OutlinePass) inside of the AddRenderPasses method.

Your code should look like this now.

using UnityEngine;
using UnityEngine.Rendering.Universal;

public class OutlineRendererFeature : ScriptableRendererFeature
{
    OutlinePass m_OutlinePass;

    public override void Create()
    {
        m_OutlinePass = new OutlinePass();
    }

    public override void AddRenderPasses(ScriptableRenderer renderer, ref RenderingData renderingData)
    {
        renderer.EnqueuePass(m_OutlinePass);
    }
}

Fun fact: This code styling is the official Unity code styling, I don't love it 😅

While we're inside the OutlineRendererFeature class, let's finish writing it, as there's not a lot more to it.

Now that the OutlinePass is being executed every frame all we need to do is pass all of the user configurable values to the pass so that eventually the full screen shader can read them in and use them to modify the effect.

Any public field or [SerializeField] inside this class will be exposed in the Forward Renderer asset, which is how we'll expose the settings that the shader will eventually need.

using System;
using UnityEngine;
using UnityEngine.Rendering;
using UnityEngine.Rendering.Universal;

[Serializable]
public class OutlineSettings
{
    public Shader OutlineShader;
    public Color OutlineColor;
}

public class OutlineRendererFeature : ScriptableRendererFeature
{
    public OutlineSettings Settings;

    OutlinePass m_OutlinePass;
    Material m_Material;

    public override void Create()
    {
        if (Settings.OutlineShader != null)
            m_Material = new Material(Settings.OutlineShader);

        m_OutlinePass = new OutlinePass(m_Material,
            Settings.OutlineColor);
    }

    public override void AddRenderPasses(ScriptableRenderer renderer, ref RenderingData renderingData)
    {
        if (renderingData.cameraData.cameraType != CameraType.Game) return;

        m_OutlinePass.ConfigureInput(ScriptableRenderPassInput.Color);
        m_OutlinePass.SetTarget(renderer.cameraColorTarget);
        renderer.EnqueuePass(m_OutlinePass);
    }

    protected override void Dispose(bool disposing)
    {
        CoreUtils.Destroy(m_Material);
    }
}

There are quite a few new things here but it's fairly straightforward. In order to apply our shader to a full screen quad, we need a material to assign it to which is created at runtime by this feature. This material and the settings for the shader are then passed into the constructor of the OutlinePass which will save both of these to private member variables.

Inside the AddRenderPasses method we make it so that this effect will only apply to the Game camera, configure the input for the pass to be the color buffer, and, via a method we'll create inside the OutlinePass, set the output target to be the camera's color buffer.

The final new part of this is that we want to safely destroy the material when we're finished with the renderer, which we can do with the built-in utility CoreUtils.Destroy.

Inside OutlinePass, in order to plug all of this in we need a constructor that stores the passed variables as member variables and a method that will store the current color buffer for the frame that can be reused in the Execute method.

public class OutlinePass : ScriptableRenderPass
{
    readonly Color m_Color;
    RenderTargetIdentifier m_CameraColorTarget;
    Material m_Material;
    static readonly int colorID = Shader.PropertyToID("_OutlineColor");

    public OutlinePass(Material material, Color color)
    {
        m_Material = material;
        m_Color = color;
        
        renderPassEvent = RenderPassEvent.BeforeRenderingPostProcessing;
    }

    public void SetTarget(RenderTargetIdentifier colorHandle)
    {
        m_CameraColorTarget = colorHandle;
    }

    public override void OnCameraSetup(CommandBuffer cmd, ref RenderingData renderingData)
    {
        base.OnCameraSetup(cmd, ref renderingData);
        ConfigureTarget(new RenderTargetIdentifier(m_CameraColorTarget, 0, CubemapFace.Unknown, -1));
    }

    // Execute
}

Nothing crazy is going on here. We use the OnCameraSetup event to make sure that the pass is going to be rendered to the camera color buffer.

A line worth mentioning is renderPassEvent = RenderPassEvent.BeforeRenderingPostProcessing. This tells the renderer where to inject our outline pass in the pipeline. In this case I've chosen before post processing so any additional effects that we might apply would also apply to the outlines, but if you don't want that you can change it to AfterRenderingPostProcessing. You can see a full list of the rendering events here.

Now in the execute method we need to take the camera color buffer that was assigned earlier, create a command buffer that has the color buffer as the render target and then draw a full screen quad with our material applied to it.

public override void Execute(ScriptableRenderContext context, ref RenderingData renderingData)
{
    var camera = renderingData.cameraData.camera;
    if (camera.cameraType != CameraType.Game)
        return;

    if (m_Material == null)
        return;

    CommandBuffer cb = CommandBufferPool.Get(name: "OutlinePass");
    cb.BeginSample("Outline Pass");

    m_Material.SetColor(colorID, m_Color);

    cb.SetRenderTarget(new RenderTargetIdentifier(m_CameraColorTarget, 0, CubemapFace.Unknown, -1));
    cb.DrawMesh(RenderingUtils.fullscreenMesh, Matrix4x4.identity, m_Material);

    cb.EndSample("Outline Pass");
    context.ExecuteCommandBuffer(cb);
    cb.Clear();
    CommandBufferPool.Release(cb);
}

If you have no idea what a command buffer is, I wrote some words about that recently! This method starts pretty simply: make sure we're rendering the Game camera and that a material actually exists, then grab a command buffer from the CommandBufferPool.

The BeginSample call is so we can see our pass clearly in the Frame Debugger and doesn't affect functionality.

We assign the material's shader properties using the static IDs defined at the top of the file. Then we set the command buffer's render target to the camera's color buffer (the other parameters here don't really matter, but feel free to look them up). cb.DrawMesh renders the full screen quad with our material applied to it.

Then that's it: we execute the command buffer against the rendering context, tidy it up, and release the buffer back to the pool.

You don't need to do it exactly like this. There is a cb.Blit() method that you could use instead, but it doesn't work with XR devices, and why not do it this way anyway, it's not much harder.
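For reference, the Blit route would look roughly like the sketch below, dropped into the same Execute method. This is an untested illustration, and _TempTex is a hypothetical property name I've made up for the temporary texture.

```csharp
// Sketch of the cb.Blit() alternative, using a temporary render texture.
// _TempTex is a made-up name; any otherwise unused property name works here.
int tempID = Shader.PropertyToID("_TempTex");
cb.GetTemporaryRT(tempID, renderingData.cameraData.cameraTargetDescriptor);

// Copy the camera color buffer out, then blit it back through our material.
cb.Blit(m_CameraColorTarget, tempID);
cb.Blit(tempID, m_CameraColorTarget, m_Material);

cb.ReleaseTemporaryRT(tempID);
```

The temporary copy is there because sampling from the same target you're rendering to is generally a bad idea, so you Blit out and then Blit back through the material.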

To make sure all of this is working, let's make a file called Outline.shader. I don't recommend using any of the shader options in the create menu unless you know what you're doing with converting them to an SRP shader. You can copy the rough outline of our shader from here.

Shader "RenderObjects/Outline"
{
    Properties
    {
        _OutlineColor ("OutlineColor", Color) = (0.0, 0.0, 0.0, 1.0)
    }
    SubShader
    {
        Tags
        {
            "RenderType"="Opaque" "RenderingPipeline"="UniversalPipeline"
        }
        LOD 100
        ZWrite Off Cull Off

        Pass
        {
            Name "OutlinePass"

            HLSLPROGRAM
            #pragma vertex vert
            #pragma fragment frag
            #include "Packages/com.unity.render-pipelines.universal/ShaderLibrary/Core.hlsl"

            struct Attributes
            {
                float4 positionHCS : POSITION;
                float2 uv : TEXCOORD0;
                UNITY_VERTEX_INPUT_INSTANCE_ID
            };

            struct Varyings
            {
                float4 positionCS : SV_POSITION;
                float2 uv : TEXCOORD0;
            };

            Varyings vert(Attributes input)
            {
                Varyings o;
                UNITY_SETUP_INSTANCE_ID(input);

                o.positionCS = float4(input.positionHCS.xy, 0.0, 1.0);
                o.uv = input.uv;

                // If we're on a Direct3D like platform
                #if UNITY_UV_STARTS_AT_TOP
                    // Flip UVs
                    o.uv = o.uv * float2(1.0, -1.0) + float2(0.0, 1.0);
                #endif
                
                return o;
            }

            float4 _OutlineColor;

            half4 frag(Varyings input) : SV_Target
            {
                return _OutlineColor;
            }
            ENDHLSL
        }
    }
}

The only couple of things I'll mention here are that, because of magic matrices, our vertex position is already in homogeneous clip space rather than object space, so there's no need to transform it. I'm also using a Mac, which means I'm rendering with Metal, so my UVs look very different to the UVs in the original tutorial; I wrote the code inside the UNITY_UV_STARTS_AT_TOP preprocessor directive to make sure my UVs are the same as in the tutorial, which I assume used OpenGL or something.
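To spell out what that flip does: uv * (1.0, -1.0) + (0.0, 1.0) leaves u alone and maps v to 1 - v, mirroring the UVs vertically. Here's the same arithmetic in C# purely as a sanity check (UvFlipDemo is just a name I've made up for this illustration):

```csharp
using UnityEngine;

public static class UvFlipDemo
{
    // Same arithmetic as the shader: uv * (1, -1) + (0, 1).
    // A v of 0 (bottom edge) becomes 1 (top edge), and vice versa.
    public static Vector2 Flip(Vector2 uv) =>
        Vector2.Scale(uv, new Vector2(1f, -1f)) + new Vector2(0f, 1f);
}
```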

Return _OutlineColor in the fragment shader, then assign this shader file to the slot in the OutlineRendererFeature, and hopefully you'll see the whole screen turn whatever color is assigned in the Color property!

Now you can actually do the other tutorial...

This whole thing might feel like a bit of an indictment against URP. I can hear you saying, "Really? A whole tutorial just to get to the start of the other one, when it was basically free before?" But I actually think this is worth learning anyway, and also, URP has a much nicer way of gathering the normals for the edge detection than is done in the tutorial. So take that, BiRP!

We've achieved steps 1 and 3 of the 4 step plan to success laid out earlier. In part two, edge detection and profit(?).

Edit: There was to be no part two. Enjoy figuring things out!

If you like working things out for yourself then you're welcome to stop after this part; you might have fun trying to figure out for yourself how to make everything from the older tutorial work.

Resources that were really helpful when doing this