Texture Mapping & Polygon Rasterizing Tutorial (1/2) [C++20]

Hello. My name is Bisqwit, and today I am going to show you
an intuitive method – of creating a software 3D renderer
that can be used for simple games. Now before we begin, I do realize that your GPU
probably has immense power – that is specifically
designed for this job, and it makes no sense
to not use it, but many people, myself included,
are interested in how this all works, so let’s go through it all manually. There are many ways
to create 3D graphics. You could create it
from tiny spheres. Let’s call them atoms. That’s what the real world does,
so it’s the best solution right? There’s just one problem:
you need gazillions of them. Many orders of magnitude more
than computer hardware will be able to handle –
for a very long time still. Instead of spheres, you could use data points,
and interpolate between them. This technique is called “voxels”, and has a long history
in computer graphics, used by games such as Comanche. However, you need to restrict
your world to certain shapes – such as a flat land
or very large cubes, or you have just renamed
the gazillions of atoms problem. That isn’t to say that voxel
graphics cannot look pretty. This footage is from an upcoming
voxel graphics game called Cloudpunk. Raytracing is another approach. This creates very
very realistic graphics, and is barely doable
with modern hardware. It involves calculating,
for each pixel, the possible
intersections and bounces – between light sources and
the ray that represents – the direction where
light is collected for that pixel. It is still not the
dominant method for graphics. What nearly every game and application
actually uses is polygons. Polygons are flat infinitely
thin textured surfaces, which are often very small – and can be combined to create
the appearance of complex objects. Polygons do not have substance. Walls made of polygons
do not have material. They are just paper
thin structures in vacuum. People made of polygons
don’t have material. They are just as hollow
as the walls are. In fact, it is wrong to say
they are paper thin. They are infinitely thin. Just like a circle does
not exist in the real world, and the closest you can have
is a cylinder or a torus, polygons do not exist in the real world;
they are purely mathematical. Nonetheless, just like dabs
of paint on a canvas, polygons are very useful for
all sorts of visual presentations. But you need thousands
or millions of them. For this reason, it is important that
they are as simple as possible. Follow the UNIX principle: Do one thing,
and do it really well. Or in Bruce Lee’s words: fear not the man who has
practiced 10000 kicks once; fear the man who has
practiced one kick 10000 times. One way to reduce the complexity is – reducing the number of
corners in the polygon. The simplest possible polygon
is the triangle. It has just three corners. If you go any smaller,
it’s no longer a surface. A line has no surface. Triangles are the simplest polygons. If you have four
or more corners, you may need to worry about
the polygon being non-convex, or possibly self-intersecting. These concepts are simply
impossible with the triangle. When you render triangles, you do not need to worry
about these special cases, because it is impossible
for them to occur. And if you do need
more complex polygons, you can always build them
from triangles. The process of drawing a polygon
is called rasterization. Raster is simply a fancy name
for a grid like this. Like squared paper. The computer screen
is also a raster. Imagine that each of these little squares
represents a pixel on the screen. So what do you do with
a triangle, like this? You have to color those pixels
that fall inside the triangle, and ignore the rest. But how do you do that? You could approach it
mathematically. Calculate the bounding box
of the triangle, and for every pixel
inside that bounding box, test whether it is on the same side
of every edge of the polygon. If the pixel is on the same side
of every edge, then it is inside the polygon
and you may plot it. But how do you plot it? You may use the plane
equation of the triangle, together with the
display geometry, to convert the 2D coordinates
into a 3D coordinate, and then reverse the view
rotation and translation – in order to get a
world 3D coordinate, and use that 3D coordinate
as an index into some texture, and fetch the pixel from
the texture at that point – and plot it on the screen.
You could do all this, and it would work. While this would be extremely
easy to parallelize, it would be very inefficient: that is a lot of mathematical operations
for each pixel. I will instead introduce a very simple and intuitive
scanline polygon rasterizer. Let’s begin by analyzing
the anatomy of triangles. Measuring the relationship between
X and Y coordinates of each corner, I have counted a total of 28
different triangle shapes. In five of them, there is
a horizontal line at the top, and in five of them, there is
a horizontal line at the bottom. In the other fourteen,
there is a bend somewhere in the middle. In seven of those,
the bend is on the left side, and in the other seven,
the bend is on the right side. The bend is indicated
in magenta color in this slide. However, the top and bottom cases are simply
special cases of the middle fourteen. The top and bottom cases
still have a bend, but the height of the bend
is simply zero. So we can ignore those, and concentrate on the
remaining fourteen. These have two major categories: Ones where the bend
is on the left, and ones where the bend
is on the right. Within the groups
of seven triangles, the differences are – where the corners are positioned
in relation to the other corners. For example, in this triangle, the bottom corner is on the
left side of the middle corner. In the next, the bottom corner is in the
same column as the middle corner. Here the bottom corner is on the
right side of the middle corner. Here it’s in the same column
as the top corner. Here it’s on the right side. In the next one, the middle corner is on the
same column as the top one, and in the last one, the middle corner is on the
right side of the top corner, but the bend is
still on the left side. Coincidentally the center triangle
is also equilateral – and the next one is
a right-angle triangle, but those aspects are irrelevant. Let’s take this
triangle for example. It has three corner points: A, B and C,
at some particular coordinates. It comprises three edges. Let’s examine one of these edges more closely. Edge A–C runs between
Y coordinates 0 and 11, and in that span, it goes from X coordinate 7 to 12. How do we calculate
what the X coordinate should be – for some particular Y coordinate
along the line, such as 6? The answer is interpolation. The interpolation formula
looks like this. You begin with a starting position. This is the X coordinate
at the top of the line. Then you have a distance to cover; this is the difference between
the end and start X coordinates. You multiply this distance
by how far you have gone. In this case, the difference between your Y coordinate
and the top Y coordinate of the line. You divide this product with the distance
you are going to cover. This is the difference between
the end and start Y coordinate. You can use this generic formula
for interpolating anything, for example how much money you have
earned in the middle of the month – if you know your monthly salary. Now let’s fill in 6 for
the Y coordinate in the formula, and see what the result is. 12 − 7 is 5,
6 − 0 is 6, and 11 − 0 is 11. 5 times 6 is 30. 30 divided by 11 is 2.727… Plus 7, that means the X coordinate
at Y=6 should be 9.727. Because pixels only exist
at integer coordinates, you have to either truncate
or round the number. We get either 9 or 10,
depending on which method we choose.
If we repeat this process for all Y coordinates between 0 and 11, we get this line for the right edge. Then we repeat the process for the
first line on the left side, and the second line on the left side. We have two lines on the left side,
because the bend is on the left. Now you may be wondering, that second line on the left
has gaps in it. Won’t those gaps cause problems? Do we not also need to solve these
intermediate coordinates somehow? The answer is no, we don’t. We just draw horizontal lines – between each pair of
left edge and right edge. Go from the left X coordinate
to the right X coordinate, and plot all pixels in between. By doing so, we have succeeded
in plotting the entire triangle. However, if we have multiple triangles
that share the same edge, some pixels are going to be
drawn multiple times. To avoid that, we actually need to
ignore the last pixel – either on the left
or right side, and the last line either
on the top or the bottom. I chose the right and bottom, because it is simpler
to code it that way. The pixels I removed belong to
any possible neighboring polygon. Let’s put this into code. We begin with a function
that does triangle rasterization. This function takes
three corner points, of some type. We don’t care what that type is, just as long as the user
also supplies a method – for extracting the X & Y coordinates
for each of these points. We collect the three corner points, and sort them in
descending vertical order, so that the first point
is on the top, the second point
in the middle and the last one
at the bottom. If two points are
on the same scanline, put the left-side one first. We don’t care about preserving the
winding direction of the points here. It’s totally fine, if we change clockwise into
counter-clockwise or vice versa. Then, we create two slopes. We will track the left edge
of the triangle, and the right edge of the triangle. One of these edges will run straight
from top to the bottom, and the other edge,
where the bend is, will first run from
top to the middle, and later from middle to bottom. A cross product will determine – whether the bend is on the
left side or the right side. In the main rasterization loop, we first process the triangle
from top to the bend in the middle, and then the section
from bend to the bottom. Both of these loops begin with
calculating the slope of the edge. If the top edge is flat, then y0=y1, and
the first loop will not be run. If the bottom edge is flat, then y1=y2, and
the second loop will not be run. However, generally speaking, the compiler will generate
more efficient code – if a lambda functor is invoked
in just one location. Therefore I convert these
two loops into one loop. And this is the core of
the triangle rasterizer. We will not touch it
again in this video.
Now we just need to define the lambda functions. This DrawPolygon function will
call the rasterizer I just defined, with three lambda function parameters. The first one, GetXY, defines how to get a coordinate pair
from one of these points. In this case, the functor
just returns the point itself. The second functor generates a slope. And the third one draws a
horizontal stripe of pixels, which is, a scanline, between the left edge
and the right edge. Now I am going to
pause for a moment. Remember this linear
interpolation formula? Hold on,
let me make it bigger. There we go. We have here an
additive expression, inside which there is
a multiplicative expression. You know that A × B
is the same as B × A? And A × B ÷ C is
the same as A ÷ C × B? Watch what happens when we switch
places of these two expressions. We get this. Now what good is
this operation for? The thing is, if the start and end
coordinates are constants, we get two constants
and one addition. The starting position
is the starting X coordinate, and the second group
is the number – that is added to the X coordinate
after every scanline. Let me show what I mean. Suppose we have a line that begins
at (20, 30) and ends at (100, 40). Therefore the starting and ending
X coordinates are 20 and 100, and the starting and ending
Y coordinates are 30 and 40. We are trying to solve X coordinates,
so the starting value is 20. By using the formula shown
at the top of the screen, we can calculate the increment
and we get 8 for the result. The X coordinate at line 30 is 20. Now, every time we move one scanline down,
we add eight to the previous result. And the results are shown in
the bottom of the screen in red. So let’s do that in program code now. We take the beginning X coordinate, and the difference between the begin and end
divided by the number of lines. That’s it. When we draw the scanline, we just sample the current
X coordinate on both sides, draw the pixels between them
using a for-loop, and after that, add the increment into the X coordinate
on both respective sides. And that’s all.
Now, to demonstrate the program, let’s add a main program. This main program will open a 2D window,
create an array of several triangles, and then draw each of those triangles
in a single color. [music] Now which color should we
assign each triangle? Things will be more
visually interesting, if we give each triangle
a different color. We should also make sure – that the triangle blitting works
perfectly without gaps or overlaps. I added some extra colors
and an if-clause – to deal with that situation. And this is the output. Now, if there were
a mistake somewhere, such as an off-by-one error, the screen might look like this. When you create your
own polygon rasterizer, I strongly recommend
you make a test like this – to catch these kinds of errors. It helped me,
and it will help you. Now that we have proven
that the code works, we can do some refactoring. I will take the slope code and
create a class of its own from it. Later on, this will help
avoid code duplication when we add more slopes. [music]
So far, our triangle corner points have been arrays with two components: X and Y coordinates. But we can just as easily
add a few more components. Let’s say, a color. A red, green and blue component. Give every corner
a different color. This makes our corners arrays
that have five components: X and Y coordinates and the
red, green and blue values. In the DrawPolygon function, we change the slope data – such that we now track
four slopes on each side. The x coordinate, and the
red, green and blue values. We don’t need to create
a slope for the Y coordinate, because that is handled
by the rasterizer – which simply just goes through
the triangle from top to bottom. After drawing each pixel, we advance all four
slopes on both sides. This is the outcome. Something is wrong though. Look at this triangle
for instance. Its top corner
is ultramarine blue, the left-side color
is chocolate cosmos, and the bottom corner
is aquamarine blue. But why is the right side
of the triangle – also in a reddish shade? Shouldn’t it be a completely smooth
fade between the two blue shades? A closer examination of
the entire picture – reveals that all of the color shades
are entirely horizontal. This is because I only did
vertical interpolation. We also need horizontal interpolation. I will create new slopes
in the scanline function, and set them to
interpolate the props, that is the red,
green and blue color, from the left side
to the right side. These props are incremented
after every pixel – rather than after
every scanline. And now, it renders
the intended way. All of the triangles are smoothly
interpolated between their three corners.
Now that the code has been proven to work, it is time to do
some more refactoring. I don’t want to have to
change the DrawPolygon function – every time I add or delete props, so I will make it generic. Let’s make it templated, so that it accepts
any kind of points. We use std::tuple_size to determine
how many props there are, and change the loops accordingly. As a slight inconvenience, because we have to use
std::get to access elements, and it needs a compile-time
constant for the index, we have to change the for-loop into
a slightly different construction. This is a C++ thing rather
than a texture-mapping thing, but let me explain
what this does. Below is a regular
for-loop in C++, and above is how you can
transform it into a compile-time loop. You only need
this sort of stuff, if the code inside your loop requires – that the loop variable
is a compile-time constant. Let’s go through this snippet in detail
to understand what it does. First, this expression at the bottom
creates a structure of a special type. It creates an std::index_sequence, with template parameters that
form an integer sequence – that goes from zero up to one less
than the specified limit. At the top, this is a
lambda function expression. It is an unnamed function. It only exists until
the next semicolon, after which it is gone
and no longer exists. The bottom part calls
this function. In other words, we are declaring a local function
and immediately calling it. This section defines what kind of
parameters the function takes. It has two parts. The first part defines
template parameters, and the second part defines
the actual variables – that are passed as parameters. This means that the function can
take as a parameter some index sequence. A template parameter pack
is formed from the numbers – comprising the index sequence. The dot-dot-dot means a pack. It means that p is not
just a single number, but it is a list of
numbers under a single name. So, this function takes some
std::index_sequence as a parameter. This parameter is not given a name, because we don’t need to do
anything with the object itself. We are only interested in
the integer sequence – that is used to define the object. And if the Size was
eight for example, the integer sequence
will be 0-1-2-3-4-5-6-7. This entire sequence is denoted
by that pack called “p”. So now we are inside the function, and we have a pack of compile-time
constants called “p”. This comma-dot-dot-dot part here means – that the preceding expression will be
expanded, using the comma operator, over all the values in the
pack used in the expression. Because of the dot-dot-dot, the compiler will treat your code
as if you had written this instead. If you had a plus sign there, the compiler would expand the expression
with the plus operator instead. So what does the comma mean? It means essentially
the same as a semicolon. It means the expressions
will be evaluated in a sequence, and the last value will be
the return value – of the parenthesis expression. You can use this kind of syntax
also to generate function parameters. For example, if you put the dot-dot-dot
inside a function call parameter list, the compiler will generate
a function call – with all the values from the pack. The expression you are expanding
may contain any sort of details, and all those details will be
duplicated for every item in the pack.
So back to the actual code now. You can see how I transformed the first loop into a compile-time loop, but not the second one. This is because the second
one does not require – that “p” is a compile-time constant. Finally, I used std::apply
to call the Plot function. You don’t always need to use
such complicated syntax in C++. The standard library contains many
helpers to make your code simpler – despite the underlying complexity. It is worth investing time to learn them. Now that we have changed the function
into a more generic form, we can use tuples instead of arrays
for the points. This change does not really serve
any other purpose – than to make the declaration simpler. The number five is not particularly
important information to the reader, so it is nice if we don’t
have to specify it explicitly. And that concludes
the introduction. In the next episode,
we will move to texture mapping, and also cover bilinear filtering,
perspective projection, and clip planes. If you liked what you saw, and you would like
others to see it too, remember to hit the like button. Or the dislike button,
if you are so inclined. YouTube uses that feedback to decide
whether to promote my content or not, so it is quite important
for the growth of my channel. Have a nice day,
and see you soon again.

98 thoughts on “Texture Mapping & Polygon Rasterizing Tutorial (1/2) [C++20]”

  1. I can’t wait to watch! Thanks for making such great video content. Have you considered making any training videos on Rust as well? It’s a great language, too.

  2. Polygon graphics in software? That'll be interesting to see how it works!
    I've only programmed a software ray tracer before and I'm currently working with making a rasterising rendering engine but with the help of the GPU

  3. I remember making a 3d software rasterizer a good while ago. I found the projection matrix to be the hardest part to figure out. Can't wait to see your solution.

  4. The editor used in this video is Joe, and it is being run in That Terminal. DOSBox was not used, neither was That Editor. The compiler used is GCC 10.0.1, and run in Debian GNU/Linux.
    The source code is now up at https://bisqwit.iki.fi/jkp/polytut/ . You will need GCC 10, but with small changes it can be compiled with as early as GCC 7.
    ERRATA: The compile-time for-loop is not C++11, but C++20. It depends on fold expressions (c++17), std::index_sequence (c++14), and lambda templates (c++20).

  5. Question … would it be efficient or possible to combine rendering techniques? Say voxels with raster etc.? When a situation requires said technique etc.

  6. Oh my god I'm so excited for this video and the next one, your old 3d graphics videos inspired me to code my own 3d renderers from scratch and now I get to learn from my inspiration himself. I'm so happy

  7. when you think you know a bit of c++ comes this guy and teach you 20 new things that completly blow your mind

  8. Just skipping drawing eg. the rightmost pixels of the triangle is an easy way to avoid overlapping pixels of adjacent triangles. However, it doesn't give the nicest result for the right edge of the triangle when there's no adjacent triangle drawn there (eg. this is the visible right edge of an object). There's like an entire line of pixels missing there. In many cases (and with very high resolutions) it might not matter visually at all. In other cases it might (especially with certain textures with thin edge colors).

    The more "correct" way of deciding which edge pixels to draw is to see if the center of the pixel is inside the mathematical triangle or not (which can be done with a dot product). This is slightly more complicated and requires a few more operations (some multiplications and additions), but in principle gives a nicer result.

    Of course in this case there's still the problem of what happens if the pixel is exactly on the edge of the mathematical triangle (which may well happen). In this case we need to decide whether to draw it or not, and change this decision for the other adjacent triangle. In this case we can use your simpler method: If it's on the right or the bottom, we don't draw it, else we do.

    But yeah, with a simpler renderer where it isn't crucial to draw triangles to the very edge, the simpler approach is viable.

    (I do not know what GPUs do, but I have the understanding that they use the better method I described.)

  9. I am trying right now to make a software rasterizer such as yours to try to render some 3d stuff by myself, but i lack of the basic math knowledge to do it so it is taking a very long time. Have you got any tips for me? Also, since all my experiments right now were made with ncurses (wich I used because it is simple to use and i'm comfortable with it), could you suggest me some easy but fast graphics libraries in c++ which i can use for displaying simple stuff (eg: plotting a pixel with a specific color maybe on a window wich is not the console)?

  10. As a beginner Python programmer, i can assure you, that I don't understand 90% of what you are doing. But it looks very interesting, good video!

  11. Do you always code with a style where you have such long lines, or is this just to fit more on same screen on the video?

  12. This brings to mind the early days of me learning to program.

    Because I learnt game development on PC in the late 90's…
    We were still in a position that you'd probably have to write your own game engine.

    And while technically obsolete even in the early 2000's… Software 3d rendering teaches you all manner of things about how 3d graphics work that simply telling your graphics hardware to render polygons would not.

    Once you understand the principles behind this you should be able to do 3d graphics on just about anything.
    Though of course, performance is still relevant.

    I can certainly write a texture mapped 3d engine for Atari 800XL, but no matter how much I optimise it, I can expect that to run at substantially less than 1 fps.

    After all, the mere act of updating 8 kilobytes of memory (regardless of exact graphics mode used, 8 kilobytes is the most a single image on the system can take up) occupies nearly all the CPU time than you have in a single frame. 1.77 mhz 6502, best case scenario (which is functionally useless – except maybe as a screen clearing routine) writes 1 byte in 4 cycles… 442,500 writes/second. 8192 = 54 fps. Since we've got a PAL machine, we're limited to 50 fps… But you can see the issue, perhaps. And it gets worse fast. Let's say you want to use the 320×192 high resolution mode. This is 1 bit per pixel. That sounds like a good thing, right? Well, no. Individual pixel level addressing is needed for most non-trivial graphics routines. That means you not only end up with a masking step, which increases the workload per pixel (probably closer to 12-16 cycles minimum now), but since you're working on individual pixels in a 320×192 image – 61,440 pixels. Which in turn would mean it now takes on the order of 980,000 cycles to update a single image. – that alone reduces us to something approaching 2 fps…

    Now, taking a somewhat more realistic case for such an old, slow system…
    I created a line drawing algorithm for 6502. It's not meant for a system that packs multiple pixels into a byte, so it'd be even slower then. And granted, I made no attempt to optimise it, there may be room for improvement, but basically each line drawn has about 100 cycles of setup, and then about 20 cycles per pixel drawn, give or take.
    From similar calculations I deduced I could draw about 1000 lines per frame at 50 fps.
    That's more than reasonable, so wireframe 3d is doable.
    Ah, but to draw graphics this way we have to clear out the previous frame before drawing a new one. (or really, with double-buffering, we clear out the backbuffer)
    And the naive way to do that is, to write '0' to each byte of memory. Except, as already noted that takes a LOT of time. (no, the 800XL does not have any special functionality to help us here. The graphics chips in the system are pretty clever, but not suited to something like that)
    So, as it turns out the most efficient scheme I could think of for drawing wireframe graphics is to 'undraw' each line again after drawing a frame. (basically draw the exact same set of lines in a background colour)
    That's great, sure. And way better than just clearing the screen memory, but… It does cut the number of lines we can draw in half.
    Now, 500 lines a frame is still pretty good for wireframe 3d…
    Though that ignores the fact we probably need to do 3d setup and run an actual game in here somewhere…

    Anyway… While the performance might not be there, the principle still remains valid – if you can write a software 3d renderer you can do 3d graphics on just about anything.

    mmh. Graphics routines are always interesting.
    Also if you remember one thing, remember Bresenham's line algorithm.
    Almost everything else you could want to do in software based graphics routines starts from that…

    Bisqwit, as always you seem to be considerably more talented than me with coding, but it's always nice to see someone with an understanding of these kinds of concepts.
    The mere fact I know anything about this kind of programming at all always seems to put me in the minority…
    Guess I just had different interests to most. XD

  13. I was looking for a 3D gaming videos on youtube which you just published !
    Put also some points and notes about run time issues and memory allocation issues (need more scientific refers).

  14. Interesting rasterisation function.

    It's not the most typical approach I've seen.

    Generally, to simplify the logic, the more typical approach is to make use of the fact that any of the other cases can be decomposed into a flat-top and flat-bottom triangle.

    Since both the flat-top and flat-bottom cases only require calculating two lines, and do not involve a change in direction, they are simpler to implement.

    The code for splitting a general triangle into a flat-top + flat-bottom pair is also trivial since it's simply dictated by the vertical location of the middle vertex of the triangle.
    Even a completely naive approach to this only requires comparing the Y value of each vertex and seeing which of these lies between the other two.
    (if two are at the same height you have a flat top or bottom triangle)

    Of course, dealing with that direction change in the middle isn't THAT difficult…
    But this still is not the approach I've seen taken very often…

  15. Oh my… Finally someone does something what I wanted with a microcontroller. Is it possible to have 3D bezier patches (not only polygons) which compose 3D objects, and then orthogonal projection to stamp them onto a 2D surface as 2D bezier patches which could then be rasterized scaline-by-scanline like retro game consoles do? The microcontroller in question is Teensy 3.5. Do you think this could work?

  16. Bisquit explained 10 books and my entire course of computer graphics with that intro and I understood better thia video than the other ones I mentioned. Thank you so much! Please take your time for explaining some SDL functions!

  17. I loved the "inefficient way" section of the video haha. I was doing the same thing when I was learning to program a software renderer until I heard about scanline rendering. Then everything was so much easier hehe.

  18. Hello, I found your channel through the first person 3D game in DOS sometime ago. I want to know if you or anyone that read this could help find were to start when it comes C++, could you help me with that? Or anyone that read this.

  19. … I had to learn this to program 3D graphics on the Game Boy Advance! Minecraft and Zelda. I added a raycaster tutorial which is like 2.5D. Very interesting.

  20. Perfect channel for binge watching while off work. Did anyone notice that he is running 48 cores at 4+ghz? This is some crazy c++20 flexing.

  21. 21:07 – "Note: A comma is not permitted here. It is a curious inconsistency, but the actual reason is: What I explained previously was a fold expression, while this is a pack expansion. These are two different features with slightly different rules. C++ is great, but it is not perfect…"

    What's also not perfect is when communities gloss over the strange syntax and kludges in modern languages, hoping not to scare away newcomers. This usually just causes novices to tear their hair out in frustration when they DO encounter such strangeness. I wish I had known at the start of my career that almost all language features are not the "right" way of doing things, but just clever hacks to force compilers to do things they were never designed to do, using a minimum of syntax changes and code rewrites. A forewarning (and apology) every now and then is a good thing.

    Thank goodness I don't do web development anymore. Modern Javascript makes my eyes bleed.

  22. Nice work again!
    You haven't showed off that you change your hardware :-). I see 48 threads on 4.2 GHz. It seems to be AMD 3960X with good cooling (water), doesn't seem? if yes which MB have you chosen?

  23. 19:05 std::integer_sequence and std::index_sequence are only available from C++14 and on-wards. You mean sizeof… for C++11.

  24. This compile-time loop is not C++11 as folding expression is available since C++17. And I'm not sure about this explicit template parameter list in lambda, I think it's C++20 but I'm not sure. I enjoyed this video very much, simple "good job" does not give you justice.

  25. Man, I can't tell you how much I love these kinds of videos. I have tried to make a basic 3D engine just for fun, but could never figure it out, so I love stuff like this. Please make more of them. You have knowledge many of us don't.

  26. 7:50 ish, id just like to point out that this is "linear" interpolation, so you can interpolate anything thats linear with this formula. anything nonlinear requires something else

  27. Oh nice! At 0:52 that's a screenshot of my implementation of Comanche's "voxel space" algorithm: https://github.com/mcsalgado/voxel_space

    I was surprised and flattered to see that in a bisqwit video! 😊

  28. awesome video. one suggestion for a bonus video or something is looking at the quake source code to see how they wrote their rasterizer that works so fast on an old Pentium cpu. One thing I know for sure that they do is that at certain distance where a triangle would be just a couple of pixels they completely skip the real rasterization code and just plot some random pixels as the result would be pretty much the same

  29. Hi
    Bisqwit, love your channel! Have you tried also some low-level programming language and build something interesting? Eg. channel like Ben Eater is making simple computer based on assembly and machine code

  30. Please do more on videos about this! I really enjoy your videos about computer graphics. You make all those scientific articles about it much easier to understand

  31. I understand that this is intended to simplify/translate the process into laymen, but anyone who can understand your vocabulary doesn't need this video.

  32. What an amusing coincidence. I've just finished texture mapping in my latest software 3D renderer just a couple days ago 🙂 My previous texture mapper only handled Build-engine style geometries (and the ceiling code was a mess), now I can do both, and with full blown (software) fragment, vertex and pixel shaders in 1080p, on one CPU core. Modern CPUs are mind bogglingly fast.
