
Software rendering and z-buffering

Dear Fellow Javites,

Hello there! First of all I'd like to say thanks for reading my post. I know that you are busy people and taking the time to read this means a lot to me. Second, I purchased the book 'Developing Games in Java' and I have read through and enjoyed it. It is one of the best books I could find on the market for creating games in Java.

I'm writing this for some assistance with code in the book.

I started a while back trying to put the pieces together and learn how to create a software renderer of my own. Of course it wasn't as easy as I thought it would be. I started with learning about vectors, projections, and scanline conversion, but my inexperience and lack of knowledge have led me to a dead end.
The problem I'm facing currently is that I want to create a z-buffered renderer that draws only solid-colored polygons: no textures, no shading. But when I go to the book for reference, it introduces z-buffering only after shading and texture mapping, and that extra information has muddied my attempts to understand how z-buffering works on its own.

I've done a fair share of research and I know (or at least think I do) how to calculate the z depth of a given point using the plane equation Ax + By + Cz + D = 0, or, as the book puts it, Z = d(UxV dot O) / (UxV dot W). But for the life of me I can't figure out how to turn that into code.
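In case it helps to see the plane-equation form in code, here is a minimal sketch (my own illustration, not the book's code) that builds A, B, C, D from three vertices via a cross product and then solves for z at any screen point. It also shows why scanline renderers like this form: z changes by a constant -A/C per pixel along a scanline.

```java
// Sketch: solving the plane equation A*x + B*y + C*z + D = 0 for z.
// The normal (A, B, C) comes from the cross product of two polygon
// edges; D then falls out of any vertex known to lie on the plane.
public class PlaneDepth {

    final float a, b, c, d;

    // Build the plane from three vertices (assumed non-collinear),
    // each given as {x, y, z}.
    public PlaneDepth(float[] v0, float[] v1, float[] v2) {
        float ux = v1[0] - v0[0], uy = v1[1] - v0[1], uz = v1[2] - v0[2];
        float vx = v2[0] - v0[0], vy = v2[1] - v0[1], vz = v2[2] - v0[2];
        // normal = u x v
        a = uy * vz - uz * vy;
        b = uz * vx - ux * vz;
        c = ux * vy - uy * vx;
        d = -(a * v0[0] + b * v0[1] + c * v0[2]);
    }

    // z at (x, y); only valid when c != 0 (polygon not edge-on).
    public float depthAt(float x, float y) {
        return -(a * x + b * y + d) / c;
    }

    // Per-pixel depth step along a scanline: z changes by -a/c for
    // each unit step in x, so the inner loop can just add a constant.
    public float dzdx() {
        return -a / c;
    }
}
```

The book's Z = d(UxV dot O) / (UxV dot W) is the same idea expressed with the texture-mapping vectors, which is exactly the coupling that makes the chapter hard to read in isolation.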

This link http://pastebin.com/jThNfNYh points to a file in which I took the original ZBufferedRenderer class and lopped off the shading and texture-mapping parts, trying to boil it down to its bare essentials so I could understand how to calculate z depth. If you have any suggestions on how to do this I would greatly appreciate them.
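For what it's worth, once shading and texturing are stripped away, the z-buffer itself reduces to one compare-and-store per pixel. This is a hypothetical minimal version, not the book's actual ZBuffer class (which stores scaled depths in a short array), using the convention that smaller z means closer:

```java
// Minimal depth-buffer sketch: one float per pixel, cleared to
// "infinitely far", with a single test in the polygon-fill inner loop.
public class SimpleZBuffer {

    private final float[] depth;
    private final int width;

    public SimpleZBuffer(int width, int height) {
        this.width = width;
        depth = new float[width * height];
        clear();
    }

    // Reset every pixel to the farthest possible depth.
    public void clear() {
        java.util.Arrays.fill(depth, Float.POSITIVE_INFINITY);
    }

    // Returns true if (x, y) at depth z is closer than what is already
    // there; if so, records the new depth. The caller then writes the
    // polygon's flat color into the frame buffer at that pixel.
    public boolean checkDepth(int x, int y, float z) {
        int i = y * width + x;
        if (z < depth[i]) {
            depth[i] = z;
            return true;
        }
        return false;
    }
}
```

Some renderers store 1/z instead and flip the comparison, since 1/z interpolates linearly in screen space; the structure is the same either way.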

Usually I'll keep working on a problem rather than bother people with questions that could be answered by looking things up or experimenting until it works. But this has become such a pain in my side that I'm reaching out for some help.

I'm working on a similar thing just now. I don't have anything that tells me exactly how to do the texture mapping, shading, or z-buffering. The book "Black Art of Java Game Development" covers calculating whether a polygon is CCW or CW, that is, counter-clockwise or clockwise. The vertices have an order when the polygon is about to be drawn in the screen coordinate system, and that order will be one or the other. One of them means the polygon is facing you, the other facing away from you. If it's facing you, you draw it.
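That CW/CCW test can be sketched as the sign of the z component of a 2D cross product of two edges in screen coordinates. This is my own illustration; which sign means "facing you" depends on your winding convention and on the fact that the screen's y axis usually points down.

```java
// Backface test via the 2D cross product of edges (v0->v1) and (v0->v2).
public class Winding {

    // Positive result: counter-clockwise; negative: clockwise;
    // zero: degenerate (the triangle is edge-on or collapsed).
    public static float cross2d(float x0, float y0,
                                float x1, float y1,
                                float x2, float y2) {
        return (x1 - x0) * (y2 - y0) - (y1 - y0) * (x2 - x0);
    }

    public static boolean isCounterClockwise(float x0, float y0,
                                             float x1, float y1,
                                             float x2, float y2) {
        return cross2d(x0, y0, x1, y1, x2, y2) > 0;
    }
}
```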

The texture mapping is called an affine transform. Basically you use a formula to map pixels from the original texture onto a stretched shape, typically a triangle onto a stretched triangle. The math scans horizontally from the top of the new triangle downward while sampling the original triangle, so one pixel in the original might become no pixels, or multiple pixels across multiple rows, in the new triangle. You have a sampling ratio for each direction, and since you are scanning horizontally, some arithmetic has to tell you which pixel of the original image you are working with, using those ratios. Multiple triangles make up a square or polygon.
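The "sampling ratio" idea for a single horizontal span might look something like this sketch (names are illustrative, not from any particular book): step (u, v) through the source texture by a fixed amount per destination pixel.

```java
// Affine span fill: sample the texture along a straight line in
// (u, v) space, one fixed-size step per destination pixel.
public class AffineSpan {

    // Fill dest[destStart .. destStart+count) by sampling texture
    // (stored row-major, texWidth pixels per row) from (u0, v0)
    // to (u1, v1).
    public static void drawSpan(int[] dest, int destStart, int count,
                                int[] texture, int texWidth,
                                float u0, float v0, float u1, float v1) {
        float du = (u1 - u0) / count;   // texel step per screen pixel
        float dv = (v1 - v0) / count;
        float u = u0, v = v0;
        for (int i = 0; i < count; i++) {
            dest[destStart + i] = texture[(int) v * texWidth + (int) u];
            u += du;
            v += dv;
        }
    }
}
```

This is the affine (perspective-incorrect) version: the ratios are constant across the span, which is exactly why large affine-mapped polygons look warped up close.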

Shading based on a light source I have no idea how to do yet. I would guess it involves figuring the distance from the light source, and the angle of the surface probably comes into play as well.
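The angle-of-the-surface guess is right: the usual diffuse ("Lambert") rule makes brightness the cosine of the angle between the surface normal and the direction to the light, clamped at zero. A minimal sketch of just that dot product:

```java
// Diffuse shading factor from a surface normal and a light direction.
public class Lambert {

    // Both vectors assumed normalized; returns 0..1, where 0 means
    // the surface faces away from the light.
    public static float diffuse(float nx, float ny, float nz,
                                float lx, float ly, float lz) {
        float dot = nx * lx + ny * ly + nz * lz;
        return Math.max(0f, dot);
    }
}
```

Distance only matters if you also apply falloff (attenuation); a directional "sun" light uses the dot product alone.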

Using matrices for transforms will probably get you a set of coordinates you might call the world or perspective view, without the perspective applied. It's really just a rotated model view, I'd assume, oriented so that the positive z axis is the direction you are looking. Then it's a simple matter to determine whether a point or object is in front of or behind another, be it a polygon, a polyhedron (a set of connected polygons), or even a single pixel.
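As a toy version of that idea (my own sketch, under the convention that the camera looks down the positive z axis): rotate points into view space, after which "in front of" is a single z comparison.

```java
// Rotate into view space, then compare depths directly.
public class ViewDepth {

    // Rotate point (x, y, z) around the y axis by angle radians;
    // returns the new {x, y, z}.
    public static float[] rotateY(float x, float y, float z, double angle) {
        float cos = (float) Math.cos(angle);
        float sin = (float) Math.sin(angle);
        return new float[] {
            x * cos + z * sin,
            y,
            -x * sin + z * cos
        };
    }

    // With everything in view space, "a is in front of b" is one compare.
    public static boolean isInFrontOf(float[] a, float[] b) {
        return a[2] < b[2];
    }
}
```

A per-object compare like this only sorts whole objects (painter's algorithm territory); the z-buffer the original poster is after does the same comparison per pixel, which is what handles interpenetrating polygons.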

Having said all this, I haven't actually coded it yet. Still working on it. Have I oversimplified? I like simplifying.
