Design constraints for blending in next-gen API backends #1030

Open
bvssvni opened this issue Feb 1, 2016 · 0 comments

This issue describes the inherent constraints of an immediate-mode design for blending when using next-gen graphics API backends.

Next-gen APIs are designed around knowing the blend equation upfront

  1. Next-gen graphics APIs, such as DirectX 12, use a Pipeline State Object (PSO) model designed to mirror the hardware state on the GPU, in order to reduce CPU overhead.
  2. The blend equation state, but not the blend reference values, is part of PSO instance initialization. It is not possible to change the blend equation without switching to another instance.
  3. Because of 2), you need to know the blend equation upfront to avoid overhead in an immediate-mode design.
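The constraint above can be illustrated with a minimal sketch (hypothetical types, not the actual D3D12 or Gfx API): the blend equation is supplied once at pipeline creation, so changing it means constructing, or having pre-constructed, another pipeline instance.

```rust
// Hypothetical model of a Pipeline State Object: the blend equation is
// immutable once the pipeline is created, mirroring D3D12/Vulkan semantics.
#[derive(Clone, Copy, PartialEq, Debug)]
enum BlendEquation {
    Alpha,
    Add,
    Multiply,
}

struct PipelineState {
    blend: BlendEquation,
    // ...shaders, vertex layout, rasterizer state, etc.
}

impl PipelineState {
    fn new(blend: BlendEquation) -> PipelineState {
        // A real backend would compile and bake the full GPU state here.
        PipelineState { blend }
    }
}

fn main() {
    let alpha = PipelineState::new(BlendEquation::Alpha);
    // There is no `set_blend` method: switching to additive blending
    // requires a second instance.
    let add = PipelineState::new(BlendEquation::Add);
    assert_ne!(alpha.blend, add.blend);
}
```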

Using shader blending is difficult

One might think that since using the fixed hardware pipeline for blending requires separate PSO instances, the blending could instead be done in the fragment shader, using a single instance.

Another benefit of using a fragment shader: most graphics cards lack the "both" term in the blend equation of the fixed hardware pipeline. This prevents you from doing full Porter/Duff blending. It also prevents you from doing multiplicative blending with pre-multiplied alpha in textures, or doing multiplicative and alpha blending at the same time.

However, this does not work, because reading from the frame buffer is an undefined operation in some shader languages, GLSL among them.

For example, with MSAA the same pixel is sampled multiple times, but the write order is not defined. So the destination color has no defined value in the fragment shader; the behavior depends on the implementation.
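Full Porter/Duff blending therefore has to run where the destination pixel is readable, e.g. in a software rasterizer. A minimal sketch of one Porter/Duff operator, "source over" on premultiplied-alpha colors (the `Rgba` type and helper are hypothetical, not from the actual codebase):

```rust
// Premultiplied-alpha color, as a CPU rasterizer might store it.
#[derive(Clone, Copy, PartialEq, Debug)]
struct Rgba {
    r: f32,
    g: f32,
    b: f32,
    a: f32,
}

// Porter/Duff "source over": out = src + dst * (1 - src.a), per channel.
// On the CPU the destination is always readable, unlike in a fragment shader.
fn source_over(src: Rgba, dst: Rgba) -> Rgba {
    let k = 1.0 - src.a;
    Rgba {
        r: src.r + dst.r * k,
        g: src.g + dst.g * k,
        b: src.b + dst.b * k,
        a: src.a + dst.a * k,
    }
}

fn main() {
    // An opaque source completely replaces the destination.
    let red = Rgba { r: 1.0, g: 0.0, b: 0.0, a: 1.0 };
    let blue = Rgba { r: 0.0, g: 0.0, b: 1.0, a: 1.0 };
    assert_eq!(source_over(red, blue), red);
}
```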

Tradeoffs

Because of all of the above, we have to make one of the following tradeoffs:

  1. Support Porter/Duff blending on the CPU only, for the software rasterizer backend, and ignore it in GPU backends
  2. Create multiple PSO instances upfront and select from a fixed set of blending effects (these cannot be interpolated, but this allows different fragment shaders)
  3. Limit the immediate API to a single blend equation (most likely alpha blending)

The downside of alternative 1) is a difference in behavior when targeting CPU or GPU. This is probably not acceptable.

The downside of alternative 3) is having to maintain two different APIs for CPU and GPU, since the richer blending would only be reachable outside the immediate API. This is most likely not an acceptable solution at this point.

This means we are left with alternative 2). We could replace the blend equation with an enum of pre-configured names; the Gfx backend would then use the same names for its PSO instances.
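A sketch of what alternative 2) could look like (the `Blend` names, `Pso` stand-in, and `Backend` type are hypothetical, not the actual Gfx API): every PSO is built upfront, one per name, and immediate-mode draw calls only select among them.

```rust
use std::collections::HashMap;

// Hypothetical pre-configured blend names the immediate API would expose
// instead of a free-form blend equation.
#[derive(Clone, Copy, PartialEq, Eq, Hash, Debug)]
enum Blend {
    Alpha,
    Add,
    Multiply,
    Invert,
}

// Stand-in for a backend PSO; a real Gfx backend would hold GPU state here.
struct Pso {
    name: Blend,
}

struct Backend {
    psos: HashMap<Blend, Pso>,
}

impl Backend {
    // Build every PSO upfront, one per blend name.
    fn new() -> Backend {
        let mut psos = HashMap::new();
        for &name in &[Blend::Alpha, Blend::Add, Blend::Multiply, Blend::Invert] {
            psos.insert(name, Pso { name });
        }
        Backend { psos }
    }

    // Immediate-mode draw calls select by name; no state is created per
    // call, so the per-draw CPU overhead stays low.
    fn select(&self, blend: Blend) -> &Pso {
        &self.psos[&blend]
    }
}

fn main() {
    let backend = Backend::new();
    assert_eq!(backend.select(Blend::Multiply).name, Blend::Multiply);
}
```

Since the set of names is closed, the backend never has to build pipeline state mid-frame, which keeps the immediate design compatible with the PSO model.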
