Ola Theander
2007-07-05 09:44:49 UTC
Dear subscribers,
I'm currently working on a small prototype where the task is to render a
surface on the screen from point samples. By point samples I mean a data set
of xyz-values, each paired with a normal, taken from points belonging to a
surface. The task is to render a hole-free surface based on this data.
Working with points has certain advantages, e.g. you do not have to worry
about connectivity between samples as you do if you render the surface as a
mesh, but in return there are some rendering problems that have to be solved.
Each point and normal (i.e. vertex) also has other attributes attached, such
as color and radius, which means that each vertex can be seen as a colored
disc in 3D space, commonly called a surfel. In the general case such a disc
is rendered as an ellipse on the screen, which can cause either holes, if the
discs are positioned so that they are rendered too far away from the
surrounding ellipses, or artifacts, if they overlap and are not properly
blended.
The mathematical framework is pretty well examined, with plenty of theory on
how to do this in the best way. My current problems are mostly technical and
related to Direct3D and HLSL issues when trying to implement the theory.
Therefore I have a few questions that I hope somebody can help me with:
1) The application feeds my custom vertex shader, implemented in HLSL, with
surfels, i.e. a point, a normal and additional attributes. Since there is no
connectivity between surfels, the surfel set maps to the PointList primitive
type, which means that only a single point (pixel) is forwarded to the pixel
shader. My question is: is it possible, in the shader/on the GPU, to generate
a plane or a disc that is fed to the pixel shader, i.e. not just a single
point but a small quad (with side = 2 * the surfel radius) oriented and
located according to the surfel position and normal? This quad would then be
shaded by the pixel shader, which handles color etc. Does this perhaps
require use of the geometry shader? (See the first sketch after these
questions for roughly what I have in mind.)
2) When all surfels are rendered I'll basically have a frame that contains a
mish-mash of ellipses. The next step I would like to perform is to blend
these ellipses together to give the impression of a smooth, continuous
surface. This means that pixels located between neighboring ellipses should
be colored with a mix of the colors of the surrounding ellipses, weighted by
their distance from the pixel being rendered. This step could be called
smoothing and blending. I suppose this technique requires me to render the
first step into some kind of off-screen buffer, similar to a z-buffer, which
is then processed in a second pass that performs the blending and smoothing
(the second sketch below shows roughly what I picture for that pass). Is
there any recommended way to do this, if it's at all possible with current
GPUs?
3) I've seen that some people use textures to blend/shade the planes. One
problem that has been mentioned is that textures are limited to 8 bits per
color channel, but I seem to recall that later GPUs support floating-point
textures. Is that correct, and if so, where can I find more information about
this? (The third sketch below assumes such a texture.)
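
To make question 1 more concrete, here is a rough, untested sketch of what I
have in mind, written against Shader Model 4.0. The struct names and the
g_ViewProj constant are just placeholders I made up; the idea is that the
vertex shader passes the surfel through unchanged and the geometry shader
expands it into a quad of side 2 * radius lying in the surfel's tangent
plane:

struct VS_SURFEL                  // one surfel, passed through by the VS
{
    float3 pos    : POSITION;     // surfel centre in world space
    float3 normal : NORMAL;
    float  radius : TEXCOORD0;
    float4 color  : COLOR0;
};

struct GS_OUT
{
    float4 posH  : SV_Position;   // clip-space position
    float2 uv    : TEXCOORD0;     // runs from -1 to 1 across the quad
    float4 color : COLOR0;
};

cbuffer PerFrame
{
    float4x4 g_ViewProj;          // combined view * projection matrix
};

// Pass-through vertex shader; with a geometry shader present the VS
// does not have to output SV_Position itself.
VS_SURFEL SurfelVS(VS_SURFEL v)
{
    return v;
}

[maxvertexcount(4)]
void SurfelGS(point VS_SURFEL input[1], inout TriangleStream<GS_OUT> stream)
{
    VS_SURFEL s = input[0];

    // Build a basis (u, v) spanning the surfel's tangent plane, both
    // scaled to the surfel radius.
    float3 n  = normalize(s.normal);
    float3 up = abs(n.y) < 0.99f ? float3(0, 1, 0) : float3(1, 0, 0);
    float3 u  = normalize(cross(up, n)) * s.radius;
    float3 v  = cross(n, u);      // also has length radius, since n is unit

    // Emit the four corners of the quad as a two-triangle strip.
    float2 corners[4] = { float2(-1, -1), float2(-1, 1),
                          float2( 1, -1), float2( 1,  1) };

    [unroll]
    for (int i = 0; i < 4; ++i)
    {
        GS_OUT o;
        float3 worldPos = s.pos + corners[i].x * u + corners[i].y * v;
        o.posH  = mul(float4(worldPos, 1.0f), g_ViewProj);
        o.uv    = corners[i];
        o.color = s.color;
        stream.Append(o);
    }
    stream.RestartStrip();
}

In the pixel shader, clip(1 - dot(uv, uv)) would then cut the quad down to a
disc, which becomes an ellipse after projection.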
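
For question 2, this is roughly the second pass I picture, assuming the
surfel pass renders into an off-screen floating-point render target with
additive blending, accumulating weight * color in RGB and the weight itself
in alpha. A full-screen quad is then drawn with a pixel shader that divides
through by the accumulated weight (again, the names are placeholders):

Texture2D    g_Accum;          // the accumulation render target from pass 1
SamplerState g_PointSampler;   // point sampling, one texel per screen pixel

struct FS_IN
{
    float4 posH : SV_Position;
    float2 uv   : TEXCOORD0;   // 0..1 over the full-screen quad
};

float4 NormalizePS(FS_IN input) : SV_Target
{
    // accum.rgb = sum(w_i * c_i), accum.a = sum(w_i) for this pixel
    float4 accum = g_Accum.Sample(g_PointSampler, input.uv);

    if (accum.a <= 0.0f)
        discard;               // no surfel covered this pixel, keep background

    return float4(accum.rgb / accum.a, 1.0f);
}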
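
And regarding question 3, this is the kind of thing I had in mind for the
splat pixel shader that would go with the first sketch and feed the second:
the per-pixel weight is fetched from a precomputed Gaussian kernel stored in
a one-channel floating-point texture (created with e.g. DXGI_FORMAT_R32_FLOAT
on the application side, if I have understood the Direct3D 10 formats
correctly) instead of evaluating exp() in the shader. Names are again my own
placeholders:

struct PS_IN                      // same layout as GS_OUT in the first sketch
{
    float4 posH  : SV_Position;
    float2 uv    : TEXCOORD0;     // -1..1 across the splat quad
    float4 color : COLOR0;
};

Texture2D<float> g_Kernel;        // one-channel float Gaussian kernel
SamplerState     g_LinearClamp;

float4 SplatPS(PS_IN input) : SV_Target
{
    // Cut the quad down to a disc.
    clip(1.0f - dot(input.uv, input.uv));

    // Look the weight up in the kernel texture.
    float w = g_Kernel.Sample(g_LinearClamp, input.uv * 0.5f + 0.5f);

    // Premultiplied output; additive blending sums these into the
    // floating-point accumulation target used by the normalization pass.
    return float4(input.color.rgb * w, w);
}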
Thank you for your help.
Kind regards, Ola Theander