Discussion:
HowTo: Surface, blending in pixel shader from point/normal vertexes?
Ola Theander
2007-07-05 09:44:49 UTC
Dear subscribers

I'm currently working on a small prototype where the task is to render a
surface on the screen from point samples of that surface. By point samples
I mean a set of data containing xyz-values, each paired with a normal,
taken from points belonging to a surface. The task is to render a
hole-free surface based on this data.

Working with points has certain advantages, e.g. you do not have to worry
about the interconnection between samples as you do when rendering the
surface as a mesh, but there are other rendering problems that have to be
solved instead.

Each point and normal (i.e. vertex) also has other attributes attached,
such as color and radius, which means that each vertex can be seen as a
colored disc in 3D space, commonly called a surfel. Such a disc is in the
general case rendered as an ellipse on the screen, which can cause either
holes, if the discs are positioned so that they are rendered too far away
from the surrounding ellipses, or artifacts, if they overlap and are not
properly blended.

The mathematical framework is pretty well examined, with a lot of theory
on how this is best done. My current problems are mostly technical and
related to Direct3D and HLSL issues when trying to implement the theory.
Therefore I have a few questions that I hope somebody can help me with:

1) The application feeds my custom vertex shader, implemented in HLSL,
with surfels, i.e. a point and a normal plus additional attributes. Since
there is no interconnection between surfels, the surfel set is analogous
to the PointList primitive type. This means that only a single point
(pixel) is forwarded to the pixel shader. My question is: is it possible
to generate, in the shader/GPU, a plane or disc that is fed to the pixel
shader, i.e. not only a single point but a small plane (with side = 2 *
the surfel radius) oriented and located according to the surfel position
and normal? This plane is then shaded by the pixel shader, which manages
color etc. Does this perhaps require the geometry shader?

2) When all surfels are rendered I'll basically have a frame that
contains a mish-mash of ellipses. The next step I would like to perform is
to blend these ellipses together to give the appearance of a smooth,
continuous surface. This means that pixels located between neighboring
ellipses should be colored with a mix of the colors of the surrounding
ellipses, weighted by their distance from the pixel being rendered. This
step can be called smoothing and blending. I suppose this technique
requires me to render the first step to some kind of buffer, similar to a
z-buffer, which is then processed in a
second step that performs the blending and smoothing. Is there any
recommended way to do this, if it's at all possible with current GPUs?

3) I've seen that some implementations use textures to blend/shade the
planes. One problem that has been mentioned is that textures are limited
to 8 bits per color channel, but I seem to recall that later GPUs support
floating-point textures. Is that correct, and if so, where can I find more
information about this?

Thank you for your help.

Kind regards, Ola Theander
legalize+ (Richard [Microsoft Direct3D MVP])
2007-07-05 16:08:59 UTC
[Please do not mail me a copy of your followup]
Post by Ola Theander
1) The application feeds my custom vertex shader, implemented in HLSL,
with surfels, i.e. a point and a normal plus additional attributes. Since
there is no interconnection between surfels, the surfel set is analogous
to the PointList primitive type. This means that only a single point
(pixel) is forwarded to the pixel shader. My question is: is it possible
to generate, in the shader/GPU, a plane or disc that is fed to the pixel
shader, i.e. not only a single point but a small plane (with side = 2 *
the surfel radius) oriented and located according to the surfel position
and normal? This plane is then shaded by the pixel shader, which manages
color etc. Does this perhaps require the geometry shader?
It's probably best for you to render point sprites or textured quads
yourself instead of rendering the vertices as point primitives. I
believe you could use a geometry shader for this, but if you have any
desire to support D3D9-only cards, then you have to have a fallback
path anyway.
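
A minimal sketch of the point-sprite variant, assuming the application
enables D3D9 point sprites (D3DRS_POINTSPRITEENABLE) on the C++ side; the
constant names, the radius packed into a texture coordinate, and the rough
size formula are illustrative only, not code from this thread:

float4x4 WorldViewProj;
float    ViewportHeight;   // used to turn a world-space radius into pixels

struct SurfelIn
{
    float3 Pos    : POSITION;
    float3 Normal : NORMAL;
    float4 Color  : COLOR0;
    float  Radius : TEXCOORD0;  // per-surfel radius packed as a texcoord
};

struct SurfelOut
{
    float4 Pos   : POSITION;
    float4 Color : COLOR0;
    float  Size  : PSIZE;       // screen-space point size in pixels
};

SurfelOut SurfelVS(SurfelIn v)
{
    SurfelOut o;
    o.Pos   = mul(float4(v.Pos, 1.0f), WorldViewProj);
    o.Color = v.Color;
    // Very rough perspective scaling; a real implementation would use the
    // projection matrix's vertical scale factor here.
    o.Size  = 2.0f * v.Radius * ViewportHeight / o.Pos.w;
    return o;
}

// With point sprites enabled, the rasterizer generates TEXCOORD0 in [0,1]
// across the sprite, so the pixel shader can shape the splat.
float4 SurfelPS(float2 uv : TEXCOORD0, float4 color : COLOR0) : COLOR0
{
    float2 d = uv - 0.5f;        // offset from the sprite centre
    clip(0.25f - dot(d, d));     // discard pixels outside the disc
    return color;
}

Note that D3D9 point sprites have a hardware-dependent maximum size
(D3DCAPS9::MaxPointSize), which is one reason a quad-based fallback path
can still be needed.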

Have you looked at the SIGGRAPH, etc., literature on forming
continuous surfaces from point samples? There are lots of really
smart algorithms that produce a traditional polygon set as their output
(and many of the newer ones can be GPU-accelerated). This would seem
to eliminate a lot of the questions you have about trying to fill
gaps between points and so on.
Post by Ola Theander
second step that performs the blending and smoothing. Is there any
recommended way to do this, if it's at all possible with current GPUs?
It sounds to me like the algorithm you're working from assumes a
software implementation where readback is trivial. Unfortunately, on
most GPUs readback kills performance, though it can be done. You can do
various tricks to perform post-processing on the GPU by rendering to a
texture and then using that texture as the input to another GPU pass,
but if it were me, I'd research the literature more and focus on
algorithms that digest point-sample data into traditional polygonal
surfaces. Unless you have a real-time stream of surface point
measurements that you're trying to visualize in real time (or near
real time), I would just transform the points to a traditional
surface once and then perform the traditional visualization
manipulation on the resulting polygonal surface.
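
A minimal sketch of such a second pass, under the assumption that the splat
pass additively accumulated distance-weighted colour into RGB and the sum of
the weights into A of a floating-point render target; the sampler binding
and the names below are illustrative, not from the thread:

// Second, full-screen pass: the first pass's render target is bound as a
// texture on sampler 0 and drawn over the whole screen with a quad.
sampler2D SplatAccum : register(s0);

float4 NormalizePS(float2 uv : TEXCOORD0) : COLOR0
{
    float4 acc = tex2D(SplatAccum, uv);
    // Divide the accumulated weighted colour by the accumulated weight to
    // get the blended surface colour; leave untouched pixels as background.
    if (acc.a > 0.0f)
        return float4(acc.rgb / acc.a, 1.0f);
    return float4(0.0f, 0.0f, 0.0f, 0.0f);
}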
Post by Ola Theander
3) I've seen that some implementations use textures to blend/shade the
planes. One problem that has been mentioned is that textures are limited
to 8 bits per color channel, but I seem to recall that later GPUs support
floating-point textures. Is that correct, and if so, where can I find more
information about this?
This is handled in D3D by the surface format of the texture. There
are deeper pixel formats (e.g. the floating-point D3DFMT_A16B16G16R16F and
D3DFMT_A32B32G32R32F) -- what's supported by your card can be found by
looking at the caps viewer in the SDK.
--
"The Direct3D Graphics Pipeline" -- DirectX 9 draft available for download
<http://www.xmission.com/~legalize/book/download/index.html>

Legalize Adulthood! <http://blogs.xmission.com/legalize/>
Ola Theander
2007-07-05 21:34:41 UTC
Hi Richard

Thank you for your answer. You're right on the spot; it's actually
the surface splatting methods you refer to in the SIGGRAPH papers that
I'm trying to implement, with a little twist to fit my specific problem.
One thing, as you mention, is that I receive a stream of data from
which I will, hopefully, update the model in near real-time.

The SIGGRAPH papers (Zwicker et al.) are generally very thorough on the
math, and I have a pretty good grasp of the algorithms, but when it comes
to the actual implementation they are pretty sparse: mostly the general
idea of how to implement it is presented, with very few details.

I'm not very experienced with Direct3D and shaders, so I'm only just
beginning to get a decent understanding of the technique, e.g. what is
possible to do with a certain generation of GPU hardware.

It seems like we have a common view on how to accomplish this: either a
quad representing each surfel, or maybe using the geometry shader to
generate the vertices on the fly. Either way, some of the articles
acknowledge that the rendered model might end up with holes.

One thing that you can perhaps help me understand: as far as I know, in
order to engage the pixel shader on a set of screen-space pixels, they
must be "covered" by the interior of at least one rendered primitive
(e.g. a triangle); otherwise they are never processed by the pixel
shader. Of course it's possible to calculate the 3D position and size of
the quads (representing surfels) so that the union of their rendered
representations covers every pixel of the model, making it hole-free and
completely colorized and shaded, but that's a little bit awkward. This is
an open question to me, but it seems it would be more suitable to blend
the pixels "missed" by the projection of the quads in the pixel shader,
e.g. by interpolating the properties of the surrounding pixels/ellipses.
I don't know if this is possible, so I'm open to suggestions.

Kind regards, Ola

legalize+ (Richard [Microsoft Direct3D MVP])
2007-07-05 22:15:46 UTC
[Please do not mail me a copy of your followup]
Post by Ola Theander
The SIGGRAPH papers (Zwicker et al.) are generally very thorough on the
math, and I have a pretty good grasp of the algorithms, but when it comes
to the actual implementation they are pretty sparse: mostly the general
idea of how to implement it is presented, with very few details.
If you post which paper you're looking at, I can give it a look-see
and might be able to make a suggestion.
Post by Ola Theander
One thing that you can perhaps help me understand: as far as I know, in
order to engage the pixel shader on a set of screen-space pixels, they
must be "covered" by the interior of at least one rendered primitive
(e.g. a triangle); otherwise they are never processed by the pixel
shader. Of course it's possible to calculate the 3D position and size of
the quads (representing surfels) so that the union of their rendered
representations covers every pixel of the model, making it hole-free and
completely colorized and shaded, but that's a little bit awkward. This is
an open question to me, but it seems it would be more suitable to blend
the pixels "missed" by the projection of the quads in the pixel shader,
e.g. by interpolating the properties of the surrounding pixels/ellipses.
I don't know if this is possible, so I'm open to suggestions.
Whenever you think in terms of "I need to perform some processing on a
set of pixels that are not covered by this bag of primitives", you
might want to consider an algorithm that uses the stencil buffer. For
instance, you can write to the stencil buffer while you draw all your
splats and then use the stencil test to select pixels in the render
target that were not written by any pixel of the splats. If you use
the previous render target's color buffer as a texture input to the
next pass, you might be able to come up with a simple interpolation
scheme to fill in the pixels selected by the stencil buffer by drawing
a big quad that covers the entire render target. People do
full-screen blur this way; they just leave out the stencil test since
they want the blur to apply to all the pixels on the screen.
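
A rough sketch of such a fill-in pass, assuming the stencil test is set up
on the application side so that the full-screen quad only touches pixels no
splat wrote to, and assuming alpha in the splat render target can serve as
a coverage flag; "TexelSize" and the sampler binding are illustrative:

// Full-screen pass over the stencil-selected holes: average whichever of
// the four axis-aligned neighbours in the previous render target were
// actually covered by a splat.
sampler2D SplatTex : register(s0);
float2    TexelSize;   // (1/width, 1/height) of the splat render target

float4 FillHolesPS(float2 uv : TEXCOORD0) : COLOR0
{
    float4 sum = 0.0f;
    float4 n;
    n = tex2D(SplatTex, uv + float2( TexelSize.x, 0)); if (n.a > 0) sum += float4(n.rgb, 1);
    n = tex2D(SplatTex, uv + float2(-TexelSize.x, 0)); if (n.a > 0) sum += float4(n.rgb, 1);
    n = tex2D(SplatTex, uv + float2(0,  TexelSize.y)); if (n.a > 0) sum += float4(n.rgb, 1);
    n = tex2D(SplatTex, uv + float2(0, -TexelSize.y)); if (n.a > 0) sum += float4(n.rgb, 1);
    if (sum.a > 0)
        return float4(sum.rgb / sum.a, 1.0f);
    return float4(0, 0, 0, 0);   // nothing nearby to interpolate from
}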
--
"The Direct3D Graphics Pipeline" -- DirectX 9 draft available for download
<http://www.xmission.com/~legalize/book/download/index.html>

Legalize Adulthood! <http://blogs.xmission.com/legalize/>
Ola Theander
2007-07-05 23:04:39 UTC
Hi Richard

I'll start with your last comment regarding the stencil buffer. I assume
that it's what's used in the DirectX SDK sample "PostProcessing", where
they apply different filters to an image. I'm actually looking at this
technique for the pixel processing.

The papers I'm looking at are mainly the following, sorted roughly in
chronological order with the oldest paper first, i.e. you might want to
look at the newer (last) ones first:

- Surface Splatting (M. Zwicker et al.): this is more or less the paper
that introduced the surface splatting technique using EWA (Elliptical
Weighted Average) filtering. It is a purely CPU-based implementation
working in screen space, creating a continuous screen-space function g(x)
that colorizes each pixel.

- High-Quality Point-Based Rendering on Modern GPUs (M. Botsch et al.):
one of the first papers presenting an approach that utilizes the GPU.

- Object Space EWA Surface Splatting: A Hardware Accelerated Approach to
High Quality Point Rendering (L. Ren, H. Pfister, M. Zwicker): this is the
paper that describes an approach that is more or less completely
implemented on the GPU, and the one that inspired my implementation the
most. It uses quads that are texture-mapped with a Gaussian for filtering,
and the quads are stretched and rotated to represent the surfel in 3D. My
concern is that this method will not generate a hole-free image.

- High-Quality Surface Splatting on Today's GPUs (M. Botsch et al.): this
paper is pretty new, from 2005, and this is where they recommend
representing the surfels as points with attributes rather than quads. They
are very focused on using the pixel shader for the rasterization of the
splats. The advantage, according to the paper, is that the CPU part is
much easier if it only has to manage a set of points, i.e. a PointList in
Direct3D, rather than quads. Basically, they let the vertex shader produce
a very rough d * d image-space square, big enough to include the projected
ellipse. The square is then refined pixel by pixel in the pixel shader,
which determines whether a pixel of the square lies inside the projected
ellipse or not, keeping the pixels inside the ellipse. This sounds very
promising, but this is where I'm lost right now. I don't see how to
have the vertex shader produce a d * d image-space square from a single
point vertex in such a way that each pixel of the square is later
processed by the pixel shader. I assume this requires a geometry shader,
but considering that this paper was published in 2005, I don't know if the
GPUs at that time were capable of that.
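
A very rough sketch of that per-pixel refinement step (loosely following the
idea of ray-casting each pixel of the bounding square onto the splat's
supporting plane, not code from the paper): the input interpolants -- the
view-space position of the pixel's point on the bounding square, plus the
splat centre, normal and radius in view space -- are assumed to come from a
point-sprite or quad-expanding vertex shader such as the ones sketched
elsewhere in this thread.

struct SplatPixel
{
    float3 QuadPosV : TEXCOORD0;  // view-space point on the bounding square
    float3 CenterV  : TEXCOORD1;  // splat centre, view space
    float3 NormalV  : TEXCOORD2;  // splat normal, view space
    float  Radius   : TEXCOORD3;
    float4 Color    : COLOR0;
};

float4 RefinePS(SplatPixel i) : COLOR0
{
    // Ray from the eye (origin of view space) through this pixel's point
    // on the bounding square, intersected with the splat's plane.
    float denom = dot(i.QuadPosV, i.NormalV);
    clip(abs(denom) - 1e-6f);                  // nearly parallel: reject
    float  t = dot(i.CenterV, i.NormalV) / denom;
    float3 q = t * i.QuadPosV;                 // hit point on the plane

    // Keep the pixel only if the hit point lies inside the surfel disc.
    float3 d  = q - i.CenterV;
    float  r2 = dot(d, d) / (i.Radius * i.Radius);
    clip(1.0f - r2);

    // Gaussian falloff, intended to be accumulated by additive blending
    // and divided out in a later normalization pass.
    float w = exp(-2.0f * r2);
    return float4(i.Color.rgb * w, w);
}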

I'm very grateful for your assistance. If you can't download those papers
I can send them to you by e-mail.

Kind regards, Ola

Bryan Crotaz
2007-07-16 12:35:50 UTC
Post by Ola Theander
I don't see how to have the vertex shader produce a d * d image-space
square from a single point vertex in such a way that each pixel of the
square is later processed by the pixel shader. I assume this requires a
geometry shader, but considering that this paper was published in 2005, I
don't know if the GPUs at that time were capable of that.
One thing I'd considered for this type of thing was to pass the vertices
in one stream, and a set of four vectors per vertex in another stream.
Each vector is half the length of the diagonal of your quad and points to
a corner of the quad, so each set of four vectors is identical and doesn't
need to be updated. Then in the VS you shift the vertex by each of the
four vectors, and your index buffer lists each of the four versions.
There's a way in streams to set the stride of each vertex so that one
vertex maps to all four vectors, but I can't remember what it's called.

This solution means that you just have to update the vertex stream each
frame, and you don't have to worry about DX10!
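
One possible reading of this two-stream idea as a D3D9 vertex shader (a
sketch only, not Bryan's code): here the corner is built in the surfel's
tangent plane instead of passing full 3D corner vectors, but the stream
layout is the same. The "one vertex maps to all four vectors" mechanism is
most likely D3D9 stream frequency, i.e. IDirect3DDevice9::SetStreamSourceFreq,
configured on the application side.

float4x4 WorldViewProj;

struct CornerIn
{
    // Stream 0: per-surfel data
    float3 Pos    : POSITION;
    float3 Normal : NORMAL;
    float4 Color  : COLOR0;
    float  Radius : TEXCOORD0;
    // Stream 1: per-corner offset, one of (-1,-1), (1,-1), (1,1), (-1,1)
    float2 Corner : TEXCOORD1;
};

struct CornerOut
{
    float4 ClipPos : POSITION;
    float4 Color   : COLOR0;
    float2 UV      : TEXCOORD0;  // [0,1] across the quad, for the pixel shader
};

CornerOut CornerVS(CornerIn v)
{
    // Build a tangent frame around the surfel normal.
    float3 n  = normalize(v.Normal);
    float3 t1 = normalize(cross(n, abs(n.y) < 0.99f ? float3(0, 1, 0)
                                                    : float3(1, 0, 0)));
    float3 t2 = cross(n, t1);

    // Shift the vertex to the corner of an oriented quad of side 2*Radius.
    float3 corner = v.Pos + v.Radius * (v.Corner.x * t1 + v.Corner.y * t2);

    CornerOut o;
    o.ClipPos = mul(float4(corner, 1.0f), WorldViewProj);
    o.Color   = v.Color;
    o.UV      = v.Corner * 0.5f + 0.5f;
    return o;
}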

Bryan
