What To Know When Creating Next Gen Assets

Posted by Aidy Burrows on April 23rd, 2015 | Comments (66)

 Written by Guilherme Henrique

next gen asset creation


● References

● When to use triangles

● Getting ready for sculpting

● Thoughts on retopology

● What makes a good UV unwrap

● Getting good normals from a bake

● Next gen texturing

● Resources (LOTS!)



Do you wanna make your very own Gears of War?
Well, skip this page then and try asking Google, because today we'll be learning the workflow of making a next gen asset!


For a long time I had problems trying to make stuff for games, e.g.

1 – The bakes would never turn out as I expected
2 – I didn't understand how I should have been setting the normals of my models
3 – I didn't know how to make a proper UV layout, or how I should have modeled my low polys.
etc, etc.


When we are used to working on rendered stuff, making art for realtime processing seems like a whole new undiscovered world. Hopefully, for those who are still figuring out how to make it all work, today I'll be clearing up the mystery!

A few years ago I started working at a game studio focused on advertising, and with that, my curiosity to finally understand how everything works started to grow. After that, a co-worker friend and I decided to raise the level and start our own business, and I found myself in a situation where learning how to properly create game content was going to be vital to the success of our work. And with what I've learned, here I am now spreading it to you 😉

Hope you find it useful!

The model I'll be using to explain the main concepts of the workflow is a tombstone I recently made for a game I'm working on. Here we'll be going from modeling all the way to loading the asset in the BGE.

So take the kids out of the room and let's start!




Yeah! I think this is worth a mention: you'll only make a good model if you have good references, so the first thing I did was dive into Google in search of heart-warming graveyard pictures.
A good tip here is to use Pinterest to search for your references; sometimes it can be even better than the Google search engine (if you can imagine that).

Green-Wood Cemetery


I won't go into deep detail on this since I really can't stand looking at any more tombstone pictures.





You can start this step in many ways: there are people who like to use apps like ZBrush and build the entire model from scratch inside them, others prefer to make a base mesh to sculpt later, and people like me, who don't give a damn about anything.

In this particular asset, I used a bit of an unusual workflow that ended up working really well, so I think I can call it a “technique”.

As the visual style is still in development, I have a certain freedom to experiment. At first my idea was to make a clean-looking low poly asset, so my modeling was focused entirely on shapes and readability. Silhouette plays a big role when working with assets, so try to focus your attention on it whenever possible!



Guidelines & Tips for low poly modeling:

As long as your model is not going to be animated or deformed, you don't have to worry about having tris (get used to hearing “tris” (short for triangles, of course) a lot! – ain't no time for three syllables when there's baking to be done!). Wise use of tris (see, there it is again!) can save you a lot on the overall polycount.

next gen asset tris

Q: Why are we able to use tris on game models?
A: First of all, when the game engine loads your model, it automatically converts all your polygons into tris, since triangles are the only geometry a video card understands; so there's no need to worry about using them.
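To see why the engine can always do this conversion, here's a minimal sketch of fan triangulation, one of the simplest schemes an importer might use (the function name and face data are hypothetical, purely for illustration): any convex n-sided polygon splits into exactly n − 2 triangles.

```python
def fan_triangulate(face):
    """Split a convex polygon (a list of vertex indices) into triangles
    by fanning out from the first vertex. An n-gon yields n - 2 tris."""
    v = face
    return [(v[0], v[i], v[i + 1]) for i in range(1, len(v) - 1)]

quad = [0, 1, 2, 3]
pentagon = [0, 1, 2, 3, 4]
print(fan_triangulate(quad))      # a quad becomes 2 tris
print(fan_triangulate(pentagon))  # a 5-gon becomes 3 tris
```

Real engines use more careful triangulators (and may split a quad along either diagonal, which is why pre-triangulating before a bake is sometimes recommended), but the vertex budget math is the same.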

Q: Then why do people tell me not to use them?
A: Just because we can use them doesn't mean we should (just like drugs, kids!). Sometimes having tris can be very beneficial, but tris can also lead to shading errors, ruining all your hard work. Organic models also don't deform well with triangles, since they break the polygonal flow of the topology; that's why you can't use tris on animatable organic things. It's also better to avoid them when you plan to use Subsurf (since this will give you strange crap in your mesh).

next gen asset smoothing errors

– First block out your details, then you worry about polycount/topology.

What does this mean?
Let's take as an example this damage on the tomb. To make it, I first blocked out the hole using the knife tool (K), then used the knife tool again to remove the n-gons. (N-gons are faces with more than 4 sides, and they can cause errors when exporting to certain engines or using certain export formats.)

next gen asset NGon


next gen asset

How do you make sure you don't have any n-gons left?


  1. Select the face selection mode

  2. Go to: Select > Select Faces by Sides

  3. On the Tool shelf (T), select “Greater than 4”


next gen asset selection
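The test behind “Select Faces by Sides > Greater than 4” is simple enough to sketch in plain Python (the function name and face list here are hypothetical, just to show the logic, not Blender's actual API):

```python
def faces_with_more_than(faces, sides):
    """Return the indices of faces that have more than `sides` vertices,
    i.e. the same test Select Faces by Sides performs with 'Greater than 4'."""
    return [i for i, f in enumerate(faces) if len(f) > sides]

# hypothetical face list: a tri, a quad, and a 5-sided n-gon
faces = [(0, 1, 2), (0, 1, 2, 3), (0, 1, 2, 3, 4)]
print(faces_with_more_than(faces, 4))  # → [2], the n-gon
```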

– Use snap to vertices with merging on to optimize your mesh, removing unnecessary edge loops

next gen asset tri count

next gen asset snapping

These tips are aimed at models that won't be sculpted, or at the retopo stage of your mesh.

BUT, “Hey Mr. Guilherme, what if I did my model that way? It’s really awesome, but I really want to sculpt it and don’t want to redo it from scratch.”

Well, let's suppose you found yourself in the situation above (curiously the same as mine with this model). What hack can you use to work around it?

In Blender we have an awesome modifier called “Remesh”. With it, you can take your messy model, full of tris and weirdness, and convert it into an awesome sculpting-friendly mesh!

next gen asset sculpt ready

For meshes that will be sculpted, just follow the well-known and established subdivision modeling pipeline.

Before sending your models to be sculpted, make sure your base mesh has even face topology:

next gen asset even topology

(In software like ZBrush, 3D-Coat or Blender, even topology is no longer necessarily required.) -future, folks-
In Blender we have Dynamic Topology; with it we can sculpt our soul out without a care for mesh resolution, as it dynamically adjusts to your needs 😉




This is where things get interesting: you take your boring model and transform it into an awesome organic devil!

My tool of choice is Mudbox; I've always used it, so it's a natural choice for me, but the foundations are the same everywhere.
In Blender we have Sculpt Mode, which can do this very well too. The tool is only a matter of preference; find what fits you best (especially your budget and hardware).

next gen asset sculpting

With models like this, I usually start by adding subdivisions to a level high enough that I can sculpt small details, since in this case I just want to detail it, not use it as a base mesh for a megalomaniac sculpt.

After that, I start using whichever brushes seem interesting to break up the flatness a bit and add some damage.

Then stencils come in handy. To do the “R.I.P” and the phrase, the best way to make a fast and clean sculpt was to create a stencil in Photoshop to use as a…. stencil?…(!) to carve the details. This way, in just a few minutes, you can come up with some really complex-looking sculpts. (Try making stencils of skulls, statues, etc. and using them in your work! Lots of detail in a really short amount of time.)

 next gen asset sculpt stencil


You can use some other stencils as well, such as bricks, rocks, cliffs and so on, to add texture to your sculpt. Look how much a sculpt can be improved with that:

next gen asset stencil applied


You can grab some top-notch alphas for your state-of-the-art sculpting from the Pixologic Download Center (for free!)




So you have your high poly sculpt. It would be great if you could just throw that into a game engine and force it to run on your best buy laptop; unfortunately, the world is not that rosy (yet). In order to keep the detail level of your sculpt while using few resources, you'll need a low poly model.

next gen asset

Why retopology?

Since you sculpted, your model has probably changed a lot from the shape of the base mesh; if you try to bake a normal map between the two, you're gonna have a bad time.

To fix this we have retopology, where you basically redo all the modeling, following the newer shape of the sculpt.

It sounds terrible, and it actually is! 😀

next gen asset retopo

But we have some tools to ease this burden: we can snap our mesh to the sculpt, making life a lot easier, and there are some other tricks that will be discussed below:

Decimate: An algorithm that will try to merge surrounding vertices without changing the overall shape.


Pros:

● Fast

● Works well with organics (not for animation!!!)

● In some cases, can drastically reduce the polycount while maintaining the shape

● Can be very accurate to the silhouette if not used in an extreme way


Cons:

● Will give you poor topology

● Tons of triangles (no good for animation-ready models)

● A handmade retopo will usually give you better results

● If used in an extreme way, can cause artifacts in the mesh

● Harder to properly unwrap and texture

● Transforms you into a lazy guy


next gen asset decimate

There is a reasonable number of applications that can do this, some better than others. ZBrush, Mudbox, 3D-Coat and Blender all have their decimators, so it should be easy to try them out.

A Standard Procedure:

If your base mesh doesn't differ that much from your sculpt, you can just tweak the base mesh to become the low poly. The same is valid if you did some kind of hard surface modeling and will bake from that: just duplicate your high poly and delete edge loops until you're happy with it.

next gen asset


You can look back at the concepts in the Modeling part; the same guidelines serve here as well.

Plugins to optimize Retopo Workflow:

A great way to enhance your productivity (not only in retopology) is to use addons/scripts to aid development. Why do you think big studios have large R&D teams? 😉 In Blender we have the awesome RetopoFlow addon by CGCookie

Always look for new ways to improve your workflow, hopefully you can even save a bit more time to sleep at night!


– If you don't want errors in your bake, it's good practice to put some bevels on the extreme corners that reach 90 degrees. The bake rays are cast along the average of the surrounding vertex normals, so if adjacent faces are perpendicular, the averaged rays leave gaps in the projection, causing that famous error on your edges 😀

– Sometimes adding a little extra geometry can fix hundreds of normal errors. Don't be afraid; 20 more polygons will hardly hurt you that much.
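The averaging that causes those gaps is easy to demonstrate. In this sketch (plain Python, hypothetical helper names, not Blender's API), a vertex on a hard 90-degree corner sits between a face pointing up and a face pointing sideways; its averaged normal ends up 45 degrees off both faces, which is why rays cast along it miss geometry near the edge:

```python
import math

def normalize(v):
    length = math.sqrt(sum(c * c for c in v))
    return tuple(c / length for c in v)

def average_normal(face_normals):
    """Vertex normal as the normalized average of the adjacent face normals,
    i.e. the direction bake rays follow when no cage is used."""
    summed = [sum(c) for c in zip(*face_normals)]
    return normalize(summed)

# A hard 90-degree corner: one face points up (+Z), the other sideways (+X).
up, side = (0.0, 0.0, 1.0), (1.0, 0.0, 0.0)
n = average_normal([up, side])
angle = math.degrees(math.acos(n[2]))  # angle between the ray and the top face
print(angle)  # ~45 degrees off each face, hence the gaps in the projection
```

A bevel inserts an intermediate face, so neighbouring face normals differ by smaller angles and the averaged ray stays close to the surface.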




Yay, the step that everyone hates! Unwrapping means taking your model and cutting it as if you were turning your 3D model into pieces of paper for someone to assemble, like papercraft.
People usually don't like to unwrap because they're not sure how to do it. Once you've learned (you'll probably keep hating it anyway), at least you'll know what you're doing!

What makes a good unwrap?
The essential factors in UV quality are the correct placement of seams and low distortion on the UV islands. At first this can seem a bit overly complicated, but in fact it isn't!

next gen asset uv map

From here, we’ll see some tips and good practices to achieve a good uv unwrap/layout:

The first step is to find places where the texture will differ sharply from its neighbours. Take a television as an example: the glass display has nothing to do with the plastic or aluminium case, so the border between those two should have a seam!

Looking for places where we will have different materials is a good starting point!

next gen asset seams

Another thing is the angle: if your mesh has an edge with an angle of around 90 degrees or more, that could be a good place for a seam too! But each case is unique; if you were unwrapping a box, for example, you wouldn't put a seam on every corner. The fewer seams you have, the better it will be when you texture it later. Hiding seams can be annoying, so try to find smart places to put them!

In the end, the job of the UV unwrap is to give a two-dimensional representation of your mesh so you can texture it, and the job of the seams is to cut the model in a way that avoids distortion.

With that said, in Blender we have some tricks to ensure that those topics will be covered, let’s take a look:


Stretch:

In Blender we have this cool feature inside the UV Editor: with Stretch we can see how much distortion our mesh has, and also where!
To activate Stretch, just open the Properties panel (N) in the UV Editor and toggle Stretch on.

next gen asset uv stretch
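One way to think about what the Stretch display measures is area distortion: how much a triangle's UV area differs from its 3D surface area. This is a rough sketch of that idea in plain Python (the metric and function names are illustrative assumptions, not Blender's exact formula, which also accounts for angle distortion):

```python
def tri_area_2d(a, b, c):
    """Area of a 2D (UV) triangle via the cross product."""
    return abs((b[0]-a[0])*(c[1]-a[1]) - (c[0]-a[0])*(b[1]-a[1])) / 2.0

def tri_area_3d(a, b, c):
    """Area of a 3D triangle via the cross product magnitude."""
    ab = [b[i] - a[i] for i in range(3)]
    ac = [c[i] - a[i] for i in range(3)]
    cx = (ab[1]*ac[2] - ab[2]*ac[1],
          ab[2]*ac[0] - ab[0]*ac[2],
          ab[0]*ac[1] - ab[1]*ac[0])
    return (cx[0]**2 + cx[1]**2 + cx[2]**2) ** 0.5 / 2.0

def area_stretch(tri3d, tri2d):
    """Ratio of UV area to 3D area; 1.0 means no area distortion."""
    return tri_area_2d(*tri2d) / tri_area_3d(*tri3d)

tri3d = [(0, 0, 0), (1, 0, 0), (0, 1, 0)]      # 3D area 0.5
good_uv = [(0, 0), (1, 0), (0, 1)]             # same area -> ratio 1.0
squashed = [(0, 0), (0.5, 0), (0, 0.5)]        # ratio 0.25 -> visibly stretched
print(area_stretch(tri3d, good_uv), area_stretch(tri3d, squashed))
```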

Now use it as a guide to see which parts need more love; we'll see how to tweak the UVs to fix stretching in the next item:

Align/Straighten tool:

Instead of selecting vertices and manually scaling them to line them up on an axis, we have a tool inside the UV Editor that can do it semi-automatically for us!
Just press (W) in the editor to open the Weld/Align menu, then use Align Auto to align the selected vertices along the axis they already mostly follow!
With Straighten, you can select a chain of vertices and straighten it along a diagonal, for example!

next gen asset align auto

next gen asset align


Pin Vertices: Key tool on the pipeline!

With pinning, you can freeze the vertices you've already unwrapped, aligned and de-stretched; that way, you can keep re-unwrapping and the pinned vertices will not get unwrapped again! Awesome.

Just press (P) with the desired vertices selected; to unpin them, press (Alt + P)

 next gen asset uv pinning

 Alright, we talked about the tools, now lets talk about concepts!

In a game asset's texture map you want to use as much space as you can, so it is REALLY IMPORTANT to spend extra time figuring out how to extract the maximum area from your UV space. The larger your UV islands are, the higher the texture resolution for that specific area, so it's a good idea to give the main visible and detailed parts of your mesh large chunks of the UV.

One thing I do with my UVs that I think helps a lot with using the UV space more optimally is to select all the UVs and scale them up!

Now, find a way to fit all the islands back in! 😉

This is fun and, I swear, it's my favorite part! It's just like a puzzle.

A technical detail worth noting: when you unwrap your mesh, the vertices along a seam are duplicated when the mesh is read by an engine!

If you have ever used an Edge Split modifier, you'll know what I mean. With Edge Split, we get sharp edges, but in the end all it's doing is splitting your edges (as the name says, duh), and you end up with 2 vertices in the same place. The logic behind a seam is similar, so it's widely recommended around the internet to have sharp edges on the UV seams; this can avoid a lot of bake errors, trust me! (Here's a quick video from Handplane with further info on the subject… )
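The duplication is easy to count. A GPU needs one vertex per unique (position, UV) pair, so a mesh vertex that sits on a seam, and therefore carries two different UVs, is stored twice. A small sketch in plain Python (the corner data here is hypothetical, purely for illustration):

```python
def gpu_vertex_count(loops):
    """Each face corner (loop) carries a position index plus a UV.
    The GPU stores one vertex per unique (position, uv) pair, so a
    seam vertex with two different UVs counts twice."""
    return len({(vi, uv) for vi, uv in loops})

# Two quads split by a seam into two UV islands (made-up coordinates):
# mesh vertices 1 and 2 lie on the seam and appear with two different UVs.
loops = [
    (0, (0.0, 0.0)), (1, (0.5, 0.0)), (2, (0.5, 1.0)), (3, (0.0, 1.0)),  # island A
    (1, (0.6, 0.0)), (4, (1.0, 0.0)), (5, (1.0, 1.0)), (2, (0.6, 1.0)),  # island B
]
print(gpu_vertex_count(loops))  # 8 GPU vertices from only 6 mesh vertices
```

Since a seam already pays this duplication cost, marking the same edges sharp splits the normals in the same places for free, which is why the two are recommended to coincide.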


In Blender we can create the sharp edges by following these steps:

First, let's finalize your seams. With all edges selected, clear them with Ctrl + E > Clear Seam, then, from the UV Editor, go to UVs > Seams from Islands.
This is just so your seams correspond to the islands you have in the editor!

Now we need to sharpen those seams: go into Edge select mode with a seam selected and use Select Similar (Shift + G) > Seam.
Then just make these edges sharp by pressing Ctrl + E > Mark Sharp.

NOTE: It's recommended to do what is outlined in the video; however, another method you can follow is this…

To make that actually sharp, you'll now need to use an Edge Split modifier: add it, uncheck Edge Angle, and leave Sharp Edges on. 😉
All the rest should be set to smooth shading.

next gen asset sharp edges

This is just my way of doing it, which has proven to work in my workflow; other people have their own methods, but this is what has given me the best results so far.

So, it seems that you have your model ready to be baked!




And finally, the most feared part of the pipeline: bake, and hope that everything goes well. Baking is certainly one of the greatest headache magnets to this date; you can spend a good while just tweaking values and… waiting……….78%……………

It took me a really long time to finally figure out how to properly bake things without cheating in Photoshop by painting some “good normals”, so let's take a look at how it can be done:

The first thing you’ll need to know is how to properly export your low poly mesh.

Select your model > Export > Obj

blender obj export settings

Now here comes the secret: if you're using the Edge Split modifier, your mesh will be split along the sharp edges, and this is not what we want. If you're not using that modifier, and are using the method laid out in the video above instead, then there are no extra steps to take. Otherwise…

…in order to export the mesh with sharp edges but still keep them welded together, we can use these settings in the export menu!

So just export it with Smoothing Groups and Keep Vertex Order enabled, and don't forget to deactivate the Edge Split modifier when exporting, otherwise it will separate your mesh!

Now you have the low poly 100% ready to be baked!

Next thing you’ll wanna do before we hit bake is to create a Cage mesh.


Q:”What is that?”
A: From Blender Wiki:
” A cage is a ballooned-out version of the lowpoly mesh created either automatically (by adjusting the ray distance) or manually (by specifying an object to use). When not using a cage the rays will conform to the mesh normals. This produces glitches on the edges, but it’s a preferable method when baking into planes to avoid the need of adding extra loops around the edges.”

So basically, a cage is a “fat” version of your model that serves as a guide for the rays the bake casts; a cage, when set up right, can fix around 80% of your bake errors!

So let’s make a Cage!

Just duplicate your low poly mesh, enter Edit Mode, select everything and hit (Alt + S). This performs a Shrink/Fatten, letting you “inflate” your mesh along its normals in a way that simple scaling can't.

next gen asset bake cage
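Why Shrink/Fatten rather than plain scaling? A sketch in plain Python (hypothetical vertex data, just to show the geometry): pushing each vertex along its own normal keeps an even gap everywhere, while scaling moves vertices in proportion to their distance from the origin, so far-away parts drift sideways and the cage gap becomes uneven.

```python
def inflate(verts, normals, dist):
    """Shrink/Fatten-style offset: push every vertex out along its own
    normal by `dist`, keeping an even gap around the whole low poly."""
    return [tuple(v[i] + dist * n[i] for i in range(3))
            for v, n in zip(verts, normals)]

# Two vertices at different distances from the origin, both with normal +Z.
verts   = [(0.0, 0.0, 1.0), (5.0, 0.0, 1.0)]
normals = [(0.0, 0.0, 1.0), (0.0, 0.0, 1.0)]

cage   = inflate(verts, normals, 0.1)                 # both move 0.1 along Z
scaled = [tuple(c * 1.1 for c in v) for v in verts]   # plain 1.1x scaling
print(cage)    # even offset everywhere
print(scaled)  # the far vertex also drifts 0.5 sideways: an uneven cage
```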


To get the perfect bake, sometimes you'll need to create the cage multiple times at different sizes until one of them just “fits perfectly”. Worth the effort!

So all right!

Now all you need to do is export everything to the baker of your choice!

For the tombstone, I used xNormal, since it's the fastest CPU baker I have ever used, and it's very easy and straightforward:

  1. Load your high poly

  2. Load your low poly

  3. Add your cage

  4. Bake! 😀


next gen asset xnormal


xNormal is a free baking application for Windows developed by Santiago Orgaz; you can download it from here.

You have a huge amount of maps to choose from here, and some of them can be very useful for texturing, especially: Ambient Occlusion, Cavity and Curvature.

Ambient Occlusion is the ambient light shading. It's helpful as a base for texturing; you can also invert and/or tweak it and use it for endless things, e.g. texture damage, wear, and the list goes on.

next gen asset ao bake

Ambient Occlusion Map Example

Cavity is a more refined ambient occlusion, showing up the model's cavities, as the name suggests. You can use it to define your model even further and accentuate the sculpt, or invert it and use it for a million other things too.

next gen asset knald cavity bake

Example of a Cavity Map baked on Knald

Curvature is great for creating highlights or dirt on the edges of your model, and is mostly used to add edge damage, scratches and so on. As this is a creative process, you can literally use your imagination and take these maps wherever your insanity leads.

next gen asset curvature

Dual color curvature map from xNormal

Note that a curvature map can be a concavity and/or a convexity map, since both mixed together make the proper “curvature map”; that's why we have two colors on this map: convexity in the red channel and concavity in the green channel.

You can use the concavity as a mask in your image editor of choice to create dust and accumulated dirt, and the convexity to simulate worn edges, for example.

next gen asset convexity bake

Convexity Map extracted from Knald
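Pulling those two masks out of the dual-color map is just channel extraction, which any image editor (or a few lines of code) can do. A sketch in plain Python over a hypothetical 3-pixel strip of RGB values:

```python
def split_curvature(pixels):
    """Split a dual-color curvature map into a convexity mask (red channel)
    and a concavity mask (green channel), ready to use in an image editor."""
    convexity = [r for r, g, b in pixels]
    concavity = [g for r, g, b in pixels]
    return convexity, concavity

# hypothetical 3-pixel strip: a worn edge, a flat area, a crevice
pixels = [(0.9, 0.0, 0.0), (0.0, 0.0, 0.0), (0.0, 0.8, 0.0)]
convex, concave = split_curvature(pixels)
print(convex)   # [0.9, 0.0, 0.0] -> mask for worn edges
print(concave)  # [0.0, 0.0, 0.8] -> mask for dust and dirt
```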

If you did everything right, you should have a pretty neat asset almost finished right now!

next gen asset normal

In case you're still getting normal map errors, check over your model and review the previous steps.

If the errors persist, I have some tips:

Explode: this is a very standard technique in the field. Sometimes the form you're trying to bake is very complex to deal with, and your bakes come out with errors because some pieces intercept each other's rays; these overlapping meshes produce artifacts projected onto the mesh.

To solve that, you can explode your mesh, which basically consists of separating the objects that may be causing the overlap. Since the only thing that matters in a bake is the resulting textures, you don't need to worry about moving things around; just find the best way to project your normals without intersections and you should be fine.

(don’t forget to explode your high poly as well, after all, the low and high poly need to be exactly in the same space, right?)

next gen asset explode

Exploding pieces by scaling their center points (“alt + ,” to activate)
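The scaling-of-center-points trick amounts to pushing each piece's origin away from a shared center by the same factor, which is why the low and high poly stay matched as long as you explode both identically. A sketch in plain Python with hypothetical piece origins:

```python
def explode(origins, factor):
    """Push each piece's origin away from the shared center by `factor`,
    separating overlapping parts before the bake. Apply the same factor
    to the matching high-poly pieces so the pairs stay aligned."""
    n = len(origins)
    center = tuple(sum(o[i] for o in origins) / n for i in range(3))
    return [tuple(center[i] + factor * (o[i] - center[i]) for i in range(3))
            for o in origins]

# hypothetical origins of three overlapping tombstone pieces
origins = [(0.0, 0.0, 0.0), (2.0, 0.0, 0.0), (0.0, 2.0, 0.0)]
print(explode(origins, 3.0))  # same layout, three times more spread out
```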

Tessellate: what you really need to care about when baking is your final textures, so feel free to modify your low poly bake mesh in any way you think will make a better bake.
A good idea is to do a simple subdivision on your low poly just to increase the vertex density; that way, the average of your normals will be much more accurate, and it can save you in a lot of situations. A great video showcasing this technique, by Peter Kojesta




To texture this tombstone I used dDo, a Photoshop plugin which in my opinion is one of the most useful applications out there; you can get an almost mind-blowing next gen texture bundle with the press of a single key. Amazing!

And if you want to give it a try, they are offering their legacy versions for free! Check it out!

next gen asset dDo

Since in Blender we don't have a physically based realtime renderer yet (but it seems the future holds great things for this year on that front!), I'll show the final model rendered in Sketchfab, using a PBR pipeline:

Riddley’s Tombstone by Guilherme Henrique on Sketchfab

With Unity and Unreal Engine recently becoming free, we now have a wider range of possibilities for PBR workflows; you have no more excuses not to learn it! (unless your desktop is awful, like mine.)




And here, let’s take a look at how it should look in the BGE:

next gen asset blender

I'm just using a Diffuse (albedo + ambient occlusion), Specular and Normal map here. Lighting plays a BIG role in the overall look of games in general, so pay double attention to it!

Q: Albedo, what’s that?
A: Albedo is the base input color, just like a Diffuse.
The main difference between them is the lack of directional light and ambient occlusion in an albedo. Since it's designed to work in a physically based environment, baked-in directional light would look incorrect under certain lighting conditions, and the AO gets added later in a separate slot.

Technically and scientifically speaking: albedo is the color of something without the influence of any light, or how the object would be seen if a perfect 100% white light were shining on every atom of it. That's why, if you look at an albedo texture alone, it usually looks kinda flat.
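Conceptually, turning a diffuse texture into an albedo means dividing the baked-in shading (lighting times AO) back out of each pixel. A rough per-pixel sketch of that idea in plain Python (the function and values are illustrative assumptions, not any tool's actual de-lighting algorithm):

```python
def to_albedo(diffuse, shading):
    """Divide baked-in shading (lighting * AO) out of a diffuse color.
    Under a perfect white light (shading = 1.0), albedo equals diffuse."""
    return tuple(min(d / max(s, 1e-6), 1.0) for d, s in zip(diffuse, shading))

# hypothetical pixel: mid-grey stone sitting in 50% shadow
diffuse = (0.25, 0.25, 0.25)
shading = (0.5, 0.5, 0.5)
print(to_albedo(diffuse, shading))  # (0.5, 0.5, 0.5): the flat, light-free color
```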

Now, just add some meshes, some lights & magic and go make your own game!



I hope I could clear up some doubts about the general asset pipeline, and that this helps in our long quest for the perfect bake!

If you want to take a look at the other models I'll be making for the game, and learn a bit more about the game we're making, don't forget to subscribe to our dev blog!


If you find this article useful, spread the word, share!
And keep extruding, Cheers!

Written by Guilherme Henrique


Some (mostly free) golden resources:


Free PBR Game Engines:



PBR Education:


Bakers and Texturing stuff:





Mesh Decimators:



CGCookie’s ReTopoFlow

Sculpting apps:


http://www.blender.org/ (well, who knows :P)

Image Editing & misc:




Brushes, stencils, etc:






Environment Maps for IBL (EXR):










References & Inspiration:





Great learning resources for game art:




Sites to obviously bookmark:








  • http://www.cubelabmedia.com/ Tadd Mencer

    This is probably the BEST tutorial/walk through I’ve ever read on this subject. So well done, Aidy. Thank you! So many resources and hard work was put into this! Bookmarking this awesomeness for later!

    • http://www.cgmasters.net/ Aidy Burrows

      Thanks Tadd! Most of the effort was done by Guilherme so the credit must go to him! 😀

      We’ll be doing a full resources page soon, and many more tutorials that will touch on this subject as it’s a bit of a minefield at the best of times!

      Very glad you liked it! 🙂 Aidy.

      • http://www.cubelabmedia.com/ Tadd Mencer

        Great! I’ve been following this site for a long while now, I’m excited to continue and learn more!

        • http://Meltinglogic.com/ Guilherme Henrique

          Hey, glad you liked it, Tadd!

          We have a bunch of cool stuff on the track, both Aidy and Me are working on some game stuff lately, so you can imagine what’s the stuff we’re digging right now 😉


        • https://www.linkedin.com/profile/view?id=146146370&trk=nav_responsive_tab_profile Patrick Depoix

          Ho Amazing News! Congratulations and I agree with Tadd: A BEST tutorial I love!

        • http://Meltinglogic.com/ Guilherme Henrique

          Hey, thank you!
          I’m glad you liked it!
          Stay close for more! 😉

  • SaphireS

    Awesome compilation of all the important stuff! Now I can just refer people here instead of explaining for hours 😉

    • http://www.cgmasters.net/ Aidy Burrows

      Glad you liked it! Yeah I think for Guilherme the idea of this tutorial started kind of small and then eventually grew practically into a small book! haha! 🙂 Aidy.

      • SaphireS

        Oh I know that pain, explaining something the right way almost always leads to nearly writing a book hehe

        • http://Meltinglogic.com/ Guilherme Henrique

          Hey, glad you liked it!

          And yea “hmm, I’ll write about normal map, Wait, for this I’ll need to write about bake. Ok, so I’ll write about baWAIT for this I’ll need to write about UV’s, Ok, so I’ll wri…”ETERNAL LOOP

          Thanks for spread the word! 😉

  • Mikiiki

    You should always triangulate before you bake. Otherwise, if your engine and your renderer triangulate the mesh differently when you import it, you’ll get an x-shaped shading error. Also I prefer FBX export to xNormal for the low-poly mesh as it can handle custom vertex normal data and can handle hard edges without the edge split modifier.

    I also recommend to set keyframes for your exploded and unexploded versions of your mesh so that you can easily go between them.

    I know you just added the bricks to demonstrate Mudbox stencils, but if I were making this asset I would probably not put worn, broken bricks in a tombstone. Real tombstones are mostly solid pieces of rock. You’d be well within your right to add rain leaks down the side due to pollution and the stone’s water solubility and moss/lichen, though.

    This is a pretty good overview but there are a few points that are just not as good as they could be.

    • http://Meltinglogic.com/ Guilherme Henrique

      Hey, great points you listed here!
      triangulation is of course a good practice, however I never had any problems with that in Blender, in Maya btw I would export it in FBX as their exporter can do the job of syncing the triangulated mesh for you and as Maya draws the triangles different from xNormal.

      And yeah, the idea behind exploding meshes for bake is just that 😉

      You’re totally right about the bricks, we even had a discussion in the studio about that too, but the mesh the way it is was just for the tutorial itself, as we’ll remove the letters and other stuff that would break the “tilling” too and etc
      Thanks for your comment, good additions as well, cheers!

  • James Sky

    Absolutely wonderful resource, it’s making my ‘Gold’ favorites folder for sure. I’ve been blown away by the quality posts on cgmaster in the last few months, but this takes the cake for informative, succinct, and concentrated awesomeness. And the resources at the end? The cherry on top. Thanks, Guilherme and everybody else who contributed to this wonderful post!

    • http://Meltinglogic.com/ Guilherme Henrique

      Heyy James, I’m glad you’ve liked it!

      Its a pleasure to be in a golden folder btw! hahahah
      Thanks for the kind words, you’re welcome! 😉

    • http://www.cgmasters.net/ Aidy Burrows

      As Guilherme has already pointed out that’s a great honor! Thanks for letting us know and keeping us encouraged! 😀 Aidy.

  • http://www.davidboura.com/ David Boura

    Hey, would you be interested in the French translation (as i do with the UE4 Archviz by Collider Guide)?

    • http://Meltinglogic.com/ Guilherme Henrique

      Whoa, that would be cool!
      for me its fine, but you’ll need to see this with the other guys from CGMasters, as I did the article for them and they owns the rights and etc etc, but Aidy should come here soon 😉

      • http://www.davidboura.com/ David Boura

        okey, let’s wait or i’ll contact him!

        • http://www.cgmasters.net/ Aidy Burrows

          David that is a very kind offer and most awesome indeed!! Thanks so much! 🙂 Aidy.

  • Olson

    Manifold doesn’t mean what you think it means – manifold means every edge is connected to exactly 2 faces, non-manifold refers to edges that are not – ie Holes, or other topological mistakes.

    An N-gon can be perfectly manifold, that’s why the select non-manifold option (alt + ctrl + shift + M) doesn’t pick them up. What you’re looking for is n-gons by selecting faces with more than 4 edges. Nothing to do with manifold or non-manifold at all.

    Other than that, it’s good advice for anyone looking to create assets.

    • http://Meltinglogic.com/ Guilherme Henrique

      Hey, thanks for the heads up!
      in fact, in the rush to have everything finished up in time I ended up changing the terms and didn’t even realize, shame on me!
      Hopefully we have ppl like you to look at these issues, I´ll be sending a correction to update the article soon, thank you!

    • http://www.cgmasters.net/ Aidy Burrows

      Thanks for spotting this Olson! This is something that was on the list for me to sort but somehow slipped through the net, thankfully you caught it – It should now be edited and sorted! 🙂 Aidy.

      • Olson

        Cheers Aidy 🙂

        @SleepArtist:disqus Sterling work, thanks so much, I will be recommending this.

        Much love. O

  • http://www.yoloisphyisics.com Albert Ortiz

    Lot of thanks for these tips!

    • http://www.cgmasters.net/ Aidy Burrows

      Thanks Albert! Much appreciated. 🙂 Aidy.

  • Uncle Snail

    Thank you for all these tips. A great tutorial! One thing that I am wondering, is why we don’t just bake it in Blender. Couldn’t we just set up the model, click bake (set to combined or whichever you wanted) in cycles, and then move straight to the BGE? That seems like it would save a lot of time. Also, do you know how to use a cage with baking in Blender?
    I am new to Blender Game, (and Blender internal). I have worked a bit in cycles though, and know some programming language.
    With that said, here is a question I was wanting to ask for a while, but never got around to…
    Which game engine would you recommend? Obviously you use Blender, but should I be using Unity? Which has better capabilities? I have been hearing rumors that Unity is better, but I already have and know Blender.
    Then again, I am hearing other rumors that Unity was better a year ago, but now Blender is surpassing it.
    On top of that, should I just scrap both and go with Unreal, or that other one that is free for nonprofit (Cryengine or something, I think…)?
    I was just wondering if you could shed some light on the subject.
    Thank you very much, Uncle Snail

    • http://www.cgmasters.net/ Aidy Burrows


      One reason this isn’t baked in Blender in this example is the additional maps that are created, such as the curvature map in xNormal and the cavity map in Knald.

      Though it is worth noting that you can get a reasonable concavity and convexity (i.e. the curvature combo) from the pointiness attribute within cycles.

      However, Guilherme also mentions that on a slower pc, xNormal gives him much faster bake times too.

      Baking a normal map with a cage within cycles is actually quite straightforward. Hopefully you’ll get everything you need from this video from the developer…https://www.youtube.com/watch?v=jOMnNb82DRI

      A huge amount of things can be done with the BGE. My thought process for choosing a game engine starts at the end, so what platform do you want to release on?

      If on pc then the BGE could do a lot of what you need, but if you’d like to maybe release on Android or iOS then maybe ue4 or unity are better options.

      Personally for me I have chosen UE4 to release on for my project…http://www.cgmasters.net/free-tutorials/gamedev-1-intro/

      I’ll do a larger post on this but the short of it is I’d like to do as little coding as possible… so that removes Unity from the list unless I want to pay for some additional addon resources. A huge amount can be done with the logic bricks and UE4’s blueprint editor. I prefer that way of visualizing the game’s directions, as it were.

      The next large consideration is I’d like to release on PC and on Android. That basically then removes BGE. Ergo UE4 is the result.

      It’s not just as basic as that to be honest though, it’s a lot to do with familiarity with UE4 from studying its predecessors UT2004 and, to a lesser extent, the UDK.

      I’ve been really impressed with the potential of UE4 from an arch viz or vfx point of view too, so it’s something I’d like to explore as well.

      Also I feel more confident with UE4 to deliver a powerful 3d first person styled game, whereas if I was just releasing a 2d styled game on Android I might be more inclined to go the Unity route and dive more into the code side as a trade off. So it depends on the project.

      That’s just my opinion on the subject of course! 🙂 Aidy.

      • Uncle Snail

        Thanks a lot! (I haven’t been on for a couple of days so I didn’t answer right away.)
        That is very helpful. I will probably download Unreal (as I have never used it yet) and see what I can do.
        About that, how easy is it to get models working properly from Blender to UE4?
        It seems like a simple import may cause some issues, and a lot of extra work making it look like it did in Blender, in UE4. Also, it seems like there may be slight technical glitches that you have to work around, but I don’t know…
        Thanks for all the advice! (I can’t watch the videos now, because of internet, but I should get around to it within the week.)

        • http://www.cgmasters.net/ Aidy Burrows

          I will begin a proper introduction to all this very soon! So stay tuned for that! 🙂 Though in short for a game yeah you’d have to begin again on the coding side, or the blueprint scripting side, it’s pretty fun seeing that stuff come together though. For some interesting rapid fire examples from someone else check this stuff out by Tesla Dev….


          And many others on that channel. 🙂

          For the art side you need to think in individual assets and build it up there, there’s also a few other things to note which i’ll go into more detail about with the upcoming info! 🙂


        • Uncle Snail

          Thanks. I have (another) question.
          I have been looking up and watching a lot on UE4, and it looks pretty awesome. It also looks like they have some good methods for making materials. (I haven’t seen much about modeling yet though…)
          So… do you think it would be better to make the assets in Blender or in UE4?
          Blender uses node (normal color) based materials, while UE4 uses physically based materials. I’m not sure, but it may be better to work a lot on the materials while in UE4 even if they were perfect in Blender, just so they can be physically based.
          So should I make the models in Blender then the materials in UE4?
          I know I am swamping a lot of questions, but thanks for reading and answering. 🙂

        • http://www.cgmasters.net/ Aidy Burrows


          Build the objects in Blender with a basic material, diffuse (albedo) and normal map. Then create the metal and roughness maps in Gimp or Krita for example (or bake out of cycles or whatever method/program combo you like) but view and setup the final materials in UE4.

          Hope that makes sense! 🙂 Aidy.

        • Uncle Snail

          Thanks, it does make sense. 🙂 Another question… You mentioned (in the article) that X-normal bakes multi-direction (green and red?) normals. I know you can separate an image by color channels in UE4, but how would you use them for the x and y, as you said in the tutorial? Do I just load in the normal and it does it automatically, or is there something I have to do to get even better results using the multi-directional normal that I get from x-normal?
          Thanks for all the help, and I’m waiting for your series on all this. 🙂 Uncle Snail.

        • http://www.cgmasters.net/ Aidy Burrows


          Are you referring to the curvature map by any chance? A normal map is something you probably won’t want to separate the channels of in most cases (flipping the green channel is something you can do automatically on import into UE4, and then things should look correct in both).

          The curvature map, on the other hand, is something you’d probably want to keep in Photoshop/Krita/GIMP or whatever you’re using to create your 2D textures.

          You separate out the channels of the image in the 2d program and use it during the 2d texturing process.

          Basically the less calculations the game engine has to make at runtime the better so you want to put all of the edge wear (for example) into the diffuse/albedo texture rather than having ue4 do it.

          Though if it’s not expected to quite run at 60fps or something then you might be able to get away with quite a lot! 🙂 Aidy.

        • Uncle Snail

          Yes, I suppose I was referring to the curvature map. Thanks.
          I was also thinking recently (if you have time), how would I bake a high-poly model of a different shape, onto a low-poly one?
          Here is my example:
          I want to have trees that are very detailed close up, but as you back away, the model changes to just a few planes. That makes sense, but (if all the lighting is not baked in) when it switches, and you do not have the proper normals (to show that the tree should stick out past the plane), the shading would change, and the shadows would mess up.
          So baking normals from a complex mesh to a plane, or any other mesh with reasonably different geometry seems fairly difficult. Also, do you know if normals would be enough to fix these problems? If not, do you know a way?
          Thanks again. I’m really loading you… :/

        • http://Meltinglogic.com/ Guilherme Henrique

          I’ll take off some of the load from Aidy hahahah 😉

          Yeah, you’re referring to the curvature map; xNormal can bake both the concave and convex information into one map, using different colors to distinguish between them. We can do similar tricks with the normal map as well. As Aidy said, it isn’t very usual, but sometimes you can use individual color channels of the normal map to simulate some effects or improve your textures. For example, I’ve used the isolated green channel to fake some crisp foam using the normal information from a procedurally displaced ocean, or to fake some finer-level ambient occlusion for texturing. But this is more about finding creative solutions for your tasks than a general practice.

          And yeah, when you bake something, you’re just “printing” the surface normals from one object onto another, so you can actually bake anything you want to a… plane, for example. Nothing stops you from doing this!

          I did some fence assets a while ago where I projected the whole metal fence onto a plane; since it’s expected to be seen from a single angle, you can get away with it:


          Yeah, it’s a plane.


          Same thing.

          Regarding your needs, this is a fairly common practice in some specific scenarios; you’ll want to look into “Level of Detail” (aka LOD) and general baking concepts.

          With LODs, you tell the engine to swap one mesh for another according to distance. This is very engine-specific, so you’re better off finding the guide for your particular pipeline.

          There are some tutorials about this subject (trees for games) around the net; search for level of detail for trees and you might end up finding some very interesting things!

        • Uncle Snail

          Thanks for the help (on both comments). I have checked out your game dev blog before, but didn’t read much of it. Maybe I will get more into it now. 😛

    • http://Meltinglogic.com/ Guilherme Henrique

      Hey, thanks! I’m glad you liked it!

      Well, Aidy already nailed all the questions extremely well, so I’ll just be complementing some points:

      As Aidy said, I prefer xNormal primarily because it’s a powerhouse of baking: you can literally take a 20-million-polygon beast and throw it in there (and actually expect it to work), even on a dual-core walmart 50% off sale pc.
      This can’t be done in basically any other bake engine out there, so it’s a huge plus, yeah.

      Also, you can bake like 10 different maps at once, all using the proper high poly model as a source instead of baking from a normal map. xNormal should be the swiss army knife in every game artist’s toolset, really.

      About the game engine, well… I work with Unreal Engine 4, like Aidy does. I think most people nowadays are moving their projects to it; it’s a huge, well-made, established game engine, and also free! (…well, at least for the most part). But a good amount of indie devs use Unity as their tool of choice, and if you want to work with advergames you’d probably be better off learning it, since most 3rd party advertising studios use it in their pipeline.

      I like BGE as a prototyping tool, I really do, but for a final game I don’t think the time spent getting things right and stable is worth it when you already have very capable engines with many years of investment and development behind them. However, with the right skillset and brain you can do crazy things with it:


      cough cough…

      • Uncle Snail

        Thanks, and nice additions to Aidy’s comment. (should that be Aidie’s?) 😛

        Anyway, I was wondering, you said you start the prototyping in Blender. How much of the actual programming do you do there? Is it only easier because you don’t have to transfer models, or is the initial setup easier to do in BGE?

        Also, how easy is it to switch engines? It seems like when switching to UE4 you would have to do a lot of code adaptation, or just start from scratch.

        Do you use the same basic code as you used in BGE, or build from the ground up?

        I haven’t gotten UE4 yet, and only have a laptop at the moment, but I should be building a higher end pc soon enough… 😛

        Anyway, thanks for the great advice. 🙂 (I can’t watch the videos now, because of internet, but I should get around to it within the week.)

        • http://Meltinglogic.com/ Guilherme Henrique

          Hey! Sorry for the waay too long delay in response, I’m really busy with work and planning out a future training dvd, crazy schedule, you know ;;

          Anyway! About the prototyping inside Blender and such: what I really like is to use it to block out some level designs, and previsualize how the textures would interact with the environment (i.e. lighting + mist).

          As an example, I uploaded some screenshots of the prototyping I usually do in Blender, generally to design some modular stuff and see how everything works together, and also to get some ideas on lighting/mood, who knows!

          Also I wrote a full blog post about this in my dev diary, in case you are interested: http://blog.meltinglogic.com/2015/05/weekly-6-may-17th/

          The weekly posts there have been a bit stuck lately because we have a bunch of freelance workload on our backs, so no time to report the updates unfortunately, but you can find a bunch of information on this subject in the past posts 😉

          You’ll find that we prototype almost nothing regarding logic inside Blender, but it can be a great tool for someone with little programming knowledge, as the logic bricks are very intuitive to work with and you can validate your ideas quite easily. But hey, the new Unreal Engine has some sweet features for this too, so you can expect people will just stick with it. Formerly, logic prototyping in tools like Blender was more common because the final game would likely be done purely in a low-level programming language like C++, and the engine middlewares were heavily based on raw programming rather than a more visual approach.

          To complement this subject and also answer your question: the game we are working on was initially built from scratch in our own proprietary game engine, coded from the ground up using Ogre 3D and C#, but last year we decided to migrate everything to Unreal Engine when they released their new licensing agreement. (Maintaining your own engine and also supporting it yourself sucks, really.)
          We had lots of custom tools to develop terrains using procedural methods, and everything was transferred as well. Programming is programming after all; you can always reuse your algorithms, or at least the logic behind them 😉

  • Alex Wang

    At step 7, do we just hand draw the texture in software like photoshop and Ndo?

    • http://Meltinglogic.com/ Guilherme Henrique

      Hey! Sorry for the late response, really busy these days!
      I focused this tutorial more on mesh creation and the general pipeline, so the texturing stage was a bit undercovered, sorry about that! However, in this demo I used dDo for the texturing, in almost a “wizard” fashion: just some clicks applying some presets and voilà! It can be a kinda lazy approach, but it really saves me a whole lot of time under a production deadline, for non-hero objects. The Quixel youtube channel has some great tutorials on this subject, but you could also do the texturing in photoshop the usual way, or in a 3D painting app like mudbox, substance painter or mari. I like software like dDo or substance painter because they can automate a lot of the texturing process for a PBR pipeline, creating all the maps intuitively, so worth a check!

      • Joseph Brandenburg

        Also, what resolution do you use for the textures and bakes? Do you do a “high detail” texture of like 8K and then make down-scaled versions (4k? 2K?) or do you only make a game-resolution texture? Or does the game engine mip-map everything and scale the detail for you?

        • http://Meltinglogic.com/ Guilherme Henrique

          Whoa, I missed this question, my apologies!

          Ah, generally I just bake everything in the 2-4k range. I don’t think downsampling from 8k is really needed, unless you have some extra time to take the 4x longer render times.

          Throwing overwhelmingly big textures straight into the engine and hoping for mip mapping isn’t a good idea either: downscaling textures at runtime takes processing power, and texture memory quadruples with each doubling of resolution. I think 4k textures work best as a raw output to work on; from there you can re-scale them for whatever your memory budget is.
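To put rough numbers on that (a back-of-the-envelope sketch; real GPU formats and compression change the exact figures):

```python
def texture_bytes(size, channels=4, bpc=1, mips=True):
    """Approximate memory for a square texture.

    size: width/height in pixels, channels: e.g. 4 for RGBA,
    bpc: bytes per channel (1 for 8-bit). A full mip chain adds
    roughly a third on top of the base level.
    """
    total = size * size * channels * bpc
    while mips and size > 1:
        size //= 2
        total += size * size * channels * bpc
    return total

# Each doubling of resolution quadruples the memory cost:
for s in (2048, 4096, 8192):
    print(f"{s:>4}px RGBA8 + mips: {texture_bytes(s) / 2**20:7.1f} MiB")
```

An uncompressed 8k RGBA texture is sixteen times the memory of the 2k version, which is why scaling down from a 4k master to fit the budget is usually the saner route.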

          But of course there’s always exceptions to the rule, with the advent of 4k gaming, maybe 8k textures in a near future wouldn’t be that uncommon after all 😛
          (But just not now!)

  • Chris

    Any possible way you could do a tutorial just like this for, say, 3DS Max or Maya? I use those programs and I do not like the feel and outcome of Blender 🙁 I know it’s free, but since many people also use industry standard programs it would be awesome to get a tutorial for those as well! Learning a new program is always frustrating and daunting.

    • http://Meltinglogic.com/ Guilherme Henrique

      Hey! I completely understand, don’t worry about that 😉
      Sometimes I feel like I should use Maya more as well; proficiency in a day-to-day work environment is something that matters a lot, but I’m fortunate enough to work professionally with Blender, so this is no longer a big deal for me.
      However, with that in mind, I tried to write this guide as software-agnostic as I could. I know I refer to several Blender tools and shortcuts, but the concepts behind them remain the same for any software!
      When you start to understand the concepts behind what you’re doing, changing software becomes just a matter of time, I promise!
      For the most part, the books about CG I bought over time are generally about Maya, Lightwave, Softimage and such, but I learnt the foundations mainly from there; translating that knowledge to my software of choice was just a matter of figuring out where the equivalent buttons and functions are, plus some minor differences in workflow.
      And if in the middle of the journey you have any questions about a particular thing in Max or Maya, for example, don’t hesitate to ask us. I’ll do my best to help you, or point to some software-specific tutorial about the subject 😉

  • Alex Wang

    At the beginning of your article, you talked about n-gons and tris. I understand why n-gons are bad, but for tris, why would it matter, since most game engines convert models into all tris anyway? Even if I make a model that has tris, when I import it into a game engine the model gets converted to tris regardless, so how can tris cause problems or other issues?
    Also, I already made some models that have tris (they are organic), and I’ve now imported them into Unreal Engine 4 and assigned materials already. Considering what you said, should I delete them and redo all of my models?

    • http://Meltinglogic.com/ Guilherme Henrique

      Hi! Apologies for my slight delay. About the triangles, we need to put that into context:

      Triangles are just fine, but in certain stages of the pipeline they sometimes aren’t.

      Let’s talk first about why they can be bad:
      1 – Sculpting:
      When sculpting a model (without dynamic topology), each time you add a subdivision level the whole model is subdivided to increase the polygon density, so you can add more details. But when there’s a triangle in the mesh, the subdivision algorithm can’t do a clean subdivision, resulting in weird topology. This weird topology can potentially lead to smoothing errors due to its unevenness, so in this step, with that workflow, triangles aren’t great.

      2 – Shading:
      Depending on where the triangle is, it can show up in the render as an artifact. For example, if you have a triangle on a curve, the triangulated shape will break the shading in that area and give incorrect shading in the render.

      3 – Animation:
      If a model needs some kind of deformation, triangles are usually bad as well. Depending on where they’re placed (in an articulation, for example) they simply can’t deform well, unlike quads that follow the edge flow of their surrounding faces.

      I recommend you explore these kinds of scenarios and try to reproduce them, so these errors become clearer to understand.
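The sculpting case (point 1) is easy to reproduce on paper. Under Catmull-Clark style subdivision, the first level turns an n-sided face into n quads: a quad stays in a clean 4-by-4 grid flow, while a triangle leaves a 3-pole vertex at its center, which is exactly the uneven topology that causes smoothing errors. A tiny sketch (hypothetical helper, just for illustration):

```python
def faces_after_subdiv(sides, levels):
    # Catmull-Clark turns an n-sided face into n quads on the first
    # level, then every quad into 4 quads on each further level.
    if levels == 0:
        return 1
    return sides * 4 ** (levels - 1)

for lvl in range(4):
    print(lvl, faces_after_subdiv(4, lvl), faces_after_subdiv(3, lvl))
# A quad's counts stay powers of 4 (1, 4, 16, 64); a triangle gives
# 1, 3, 12, 48 -- and the vertex where its three quads meet is a
# 3-pole, breaking the even grid flow around it.
```

The count difference itself is harmless; it’s that 3-pole (and the stretched quads around it) that shows up as shading unevenness once you sculpt on top.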

      On the other hand, you’re right: the GPU only processes triangles, so everything will be converted to them sooner or later!
      BUT, this doesn’t mean you can simply forget all the modelling conventions and start doing things in a careless fashion. What matters most here is the topology; having a good edge flow is something that can really make a huge difference. Even when converted to tris, the edge flow is still there, rather than just random triangulated shapes.

      With all that said:
      My low poly models almost always have tris, and this isn’t a problem at all, because I won’t need to subdivide or deform them. If I do need either of those, then I reconsider and use all (or mostly) quads.
      If your model has deformations, it’s good practice to have good edge flow there using quads; however, in parts that don’t need any deformation, having quads is not that necessary.

      Hope I’ve been clear, sorry for the rushed response, it’s just that I have a ton of work here 😀
      Anything else, shoot me a question!

      • Alex Wang

        Thank you!

        Your response helped me understand a lot of things in modeling. I think I will have to come back to your article later in the future, because I just started learning modeling and I haven’t gotten to the animation part, so I don’t know anything about deformations.

        You said “My low poly models almost always have tris, and this ain’t a problem at all…” so do you check your shading in the modeling software? If you happen to run into some shading errors, will you just figure out some way to get rid of your triangles? If not, do you just leave the tris there?

        • http://Meltinglogic.com/ Guilherme Henrique

          After making hundreds of models over the years you start to know what should work and what shouldn’t, by making a lot of crap halfway through hahahah

          If you bake something (well done) onto a low poly model, even very bad topology (on the low poly side) can be hidden by a good normal map; that’s why we can do a decimation with a bunch of weird triangles and still have something that looks good. So yeah, it’s good practice to check that your model is smoothing well; depending on the case you might want to add more polys near an edge, or put in some extra edge loops to keep the shading uniform. But to be honest, if you start to watch a ton of modeling timelapses (good ones) and understand the topology concepts behind them, this shading issue won’t be such a problem anymore. If you look at my low poly models without a normal map, for example, they look like total crap, but with a normal map added they magically become great, with all the shading goodness.

          An example here: http://blog.meltinglogic.com/wp-content/uploads/2015/04/assd.png

          The middle model is the proper retopo geo I was doing, with quads and proper topology, and the right one is a decimated mesh straight from the high poly. Looking at them, the retopo mesh will look much cleaner when shaded, and you can judge that just from its wireframe, by its good edge flow, unlike the decimated one with its bunch of random triangles. But if you bake them with the high poly information, in the end they’ll look almost the same. The decimated mesh can have some shading artifacts here and there if you add a strong light source with hard shadows, like a sun lamp (some triangles can show up with a hard edge, for example, due to the angle of the face in relation to the light source), but other than that you should be just fine.

          In the end, this is the kind of thing you’ll learn while modeling and seeing what works, what doesn’t, and why. So just keep moving and doing stuff! 😉

        • http://Meltinglogic.com/ Guilherme Henrique

          Ah, and notice how even in the middle retopo one I added some triangles here and there near the base, as they won’t affect the general silhouette at all; but for the cylindrical part of the candle I preferred to have evenly sized quads, as this minimizes the shading artifacts mentioned above. So it’s just a matter of understanding how a certain topology will react to the lighting, and that you learn by making models and seeing them shaded, learning what works and what doesn’t 😉

    • http://www.hyperbeamgraphics.com Laurie Annis

      Also, in the early modeling stages, being able to select edge loops and face loops can be critical, especially for symmetry, seams and the like, but triangles have a tendency to interrupt edge loops.

  • Incognito

    I didn’t understand this part correctly:

    “So in order to export the mesh with sharp edges but still keep them welded together we can use these settings on the exporting menu!

    So just export it with Smoothing Groups and Keep Vertex Order assigned, and dont forget to deactivate the edge split modifier when exporting, otherwise it will separate your mesh!”

    If I add the Edge Split modifier, my smoothing becomes perfect, but the vertex count doubles. If I deactivate the modifier before export, the smoothing returns to its previous bad state. What am I doing wrong?

    • http://Meltinglogic.com/ Guilherme Henrique

      Well, first things first:
      When you use the Edge Split modifier, it actually splits all the edges that fall within the angle threshold, so the vertices along them are doubled! A hard surface is just that: split edges, so the normals won’t average between the two faces to create the soft shading illusion.
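That doubling can be sketched in plain Python (a toy model of what exporters and GPUs do with split normals, not Blender’s actual code): around each vertex, every group of faces that is only separated by sharp edges needs its own copy of that vertex.

```python
from collections import defaultdict

def edge_key(a, b):
    # Undirected edge: store the two vertex indices in sorted order.
    return (a, b) if a < b else (b, a)

def gpu_vertex_count(faces, sharp_edges):
    """Count exported vertices: faces around a vertex that are only
    reachable from each other across sharp edges get separate copies."""
    sharp = {edge_key(*e) for e in sharp_edges}
    edge_faces = defaultdict(list)   # edge -> faces using it
    vert_faces = defaultdict(list)   # vertex -> faces using it
    for fi, face in enumerate(faces):
        for i, v in enumerate(face):
            vert_faces[v].append(fi)
            edge_faces[edge_key(v, face[(i + 1) % len(face)])].append(fi)
    total = 0
    for v, incident in vert_faces.items():
        # Union-find over the faces around v, merged across smooth edges.
        parent = {f: f for f in incident}
        def find(x):
            while parent[x] != x:
                parent[x] = parent[parent[x]]
                x = parent[x]
            return x
        for e, fs in edge_faces.items():
            if v in e and e not in sharp and len(fs) == 2:
                parent[find(fs[0])] = find(fs[1])
        total += len({find(f) for f in incident})
    return total

cube = [(0, 1, 2, 3), (4, 7, 6, 5), (0, 4, 5, 1),
        (1, 5, 6, 2), (2, 6, 7, 3), (3, 7, 4, 0)]
every_edge = {edge_key(f[i], f[(i + 1) % 4]) for f in cube for i in range(4)}

print(gpu_vertex_count(cube, set()))       # 8: all smooth, corners shared
print(gpu_vertex_count(cube, every_edge))  # 24: all hard, 3 copies per corner
```

So the higher vertex count Incognito sees is expected: hard edges genuinely cost extra vertices, whether the split happens in Blender or later in the engine.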

      Technicalities aside, if you export the mesh with the Edge Split modifier applied, all the parts that have a hard edge will be separated, so it’s very easy to get shading issues there.
      If you’re baking a normal map from a high poly, I would probably just export the raw low poly model with smooth shading (even if it looks weird); the normal map should then take care of all the hard edges (yeah, it’s magic), provided your high poly has all the hard edges working great and you also use a cage when baking.

      Try that and tell me if it works!

      Blender doesn’t have smoothing groups the same way ZBrush or 3ds Max do; that’s why I mentioned that export mode, but for baking purposes you should be just fine doing it the way I mentioned above 😉

      • jovianghost

        Hi, fantastic tutorial but I also have a bit of a hard time understanding this paragraph due to the way it’s worded.

        So I think what you’re saying is that you keep the Edge Split Modifier active when exporting the low poly for a high poly bake in another programme. e.g xNormal. But you deactivate the Edge Split Modifier when exporting the low poly to be used in a game engine?

        My confusion comes from the first sentence, which talks about exporting the mesh with sharp edges, and the second sentence, which says “don’t forget to deactivate the edge split modifier when exporting”. I can’t see how we’d have sharp edges when exporting if we didn’t apply the Edge Split Modifier and then turned it off.

        Any help would be greatly appreciated! Thanks.

        • http://Meltinglogic.com/ Guilherme Henrique

          Hi! In short, what I tried to say in that part was something along the lines of:

          – Use the Edge Split modifier to check how your sharp edges look
          – Deactivate it when it’s ready to export (otherwise your mesh will be “physically” split at the sharp edges*)

          (*That’s what the Edge Split modifier is all about, indeed.)

          What Blender does under the hood is tag those edges as sharp, so when you load the mesh in other programs, the program reads those “tags” as well and knows those are sharp edges.

          The purpose of the Edge Split here is just to show where these sharp edges are, because Blender natively won’t show them for you in the viewport.

          Even if you do not see the sharp edges, they’re there! Other programs may read them natively and show them properly, however (like Max).

          Hope I’ve been clear this time, sorry for the lack of clarity; English isn’t my main language, nor my strength 😛


        • jovianghost

          Ah thanks! I totally get it now. I was stuck on that part because a) I thought Edge Split enabled the sharp edges instead of just showing them and b) I was still a bit unsure about how baking normal maps actually worked.

          I’m glad I didn’t understand though because I’ve learned a LOT in the last couple of days trying to figure it all out. Because I got stuck, I now understand how to avoid seams or waviness in the normal map and the best practices for using averaged and explicit normals, or extra geometry. So all is good.

          Also no need to apologise, this tutorial is ridiculously useful and one of the best resources right now for learning this whole process.

        • http://www.cgmasters.net/ Aidy Burrows

          Hi! I’ve also added some extra info up in another comment; I’ll paste it in here too in case it gets missed above…

          Also, I used to work with Maya all the time creating the Lego series of games, and I would switch between Blender and Maya often. The Blender devs eventually added a method for a workflow which is similar to working with edges like you would in Maya (basically just like you would with the edge split modifier, but without the edge split modifier! If that makes sense!).

          All you have to do is make sure your object is all smooth normals. In other words, tab into edit mode on the cube, select all the faces and go Ctrl F > Shade Smooth, then go to the properties window, find the object data tab and enable ‘Auto Smooth’. Then select an edge and go Ctrl E > Mark Sharp! TADA! No edge split modifier!

          I’ll do a video on it as it seems to trip a few people up.

          The edge split modifier still has its uses sometimes though, so either workflow is fine. 🙂


        • jovianghost

          You actually just answered a question I needed to look into. I was figuring out the best way to export blender meshes to Substance Painter via .fbx for baking (+texturing), and I stumbled across a comment about using sharp edges and auto smooth working with a .fbx export.

          I didn’t know that I had to smooth the entire mesh first before applying auto smooth (so thanks for that tip!). I just did a quick test of exporting a mesh with sharp edges + auto smooth into Substance Painter and it seems to work fine, with the appropriate edges sharp and smooth.

  • Reil3D

    Thank you for the tutorial, I really enjoyed it. I’m having some trouble with normals on a mesh, so I’m gonna try your suggestions for sure! There’s only one thing I’d like to ask: when you talk about exporting the model from Blender, you talk about the sharp edges. I work in Maya, so smoothing groups are just hard/soft edges. Is it correct to just apply hard edges where required and export the mesh keeping smoothing and normals? Or is there something I didn’t get from that part of the tutorial?
    Thank you again, I’ll put this page in the bookmarks and go see the other tutorials! 🙂

    • http://Meltinglogic.com/ Guilherme Henrique

      Hey, I’m glad you’ve liked it, thanks!

      I just finished answering a very similar question in the previous comment; you can find it here: http://www.cgmasters.net/free-tutorials/what-to-know-when-creating-next-gen-assets/#comment-2377896771

      In short, if you apply the edge split modifier your mesh will actually be split too, so if you just mark the edges as sharp and export them keeping the smoothing groups, everything should work just fine! (well, at least it should! hahah)

      It’s been a while since I last exported a mesh with hard edges to Maya, so forgive me if I’m talking BS; if anything, I can take a look back here to find out what I was doing at the time.

      Moreover, you can always re-work your smoothing groups and so on in another piece of software; simple sharp edges can be recreated easily inside Maya, and you can even re-merge all the split vertices if you applied the split modifier by accident 😉
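That re-merge step is just a "merge by distance" (weld): vertices that the split modifier duplicated at the same position get collapsed back into one. Here's a minimal pure-Python sketch of the idea (the function name is hypothetical; both Blender and Maya ship their own, much faster versions of this operation):

```python
def merge_by_distance(verts, eps=1e-4):
    """Collapse vertices closer than `eps` together, returning the merged
    vertex list and a remap table old_index -> new_index."""
    merged, remap = [], []
    for v in verts:
        for i, m in enumerate(merged):
            if all(abs(a - b) <= eps for a, b in zip(v, m)):
                remap.append(i)  # duplicate: reuse the existing vertex
                break
        else:
            remap.append(len(merged))  # first time we see this position
            merged.append(v)
    return merged, remap

# An edge split by the modifier duplicates its vertices in place;
# merging welds the duplicates back together.
verts = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 0.0, 0.0)]
merged, remap = merge_by_distance(verts)
print(merged)  # [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0)]
print(remap)   # [0, 1, 0]
```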

      And we’re glad to be in your bookmark, Thanks!

      • Reil3D

        I kinda missed the reply notice, sorry ’bout the late reply. Thank you, I’ve seen the other answer and I think I’ve figured it out.
        Little suggestion for a new tutorial: cages. No matter how much I practice, there’s something I can’t grasp. Many tutorials say to push vertices along their normals. In hard surface modeling, I’ve found that’s not always the case, and tweaking the cage can be really frustrating if you can’t predict the outcome.
        Thank you again!
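For readers wondering what "push vertices along their normals" means in practice, here's a minimal sketch of the naive automatic cage (function names made up for illustration): each low-poly vertex is pushed outward along the average of its surrounding face normals. On hard-surface meshes with split normals, duplicated vertices push in different directions and the cage tears, which is exactly why hand-tweaking becomes necessary.

```python
def averaged_normal(normals):
    """Average the (unit) face normals around a vertex and renormalize."""
    sx = sum(n[0] for n in normals)
    sy = sum(n[1] for n in normals)
    sz = sum(n[2] for n in normals)
    length = (sx * sx + sy * sy + sz * sz) ** 0.5
    return (sx / length, sy / length, sz / length)

def cage_vertex(v, face_normals, offset=0.05):
    """Push a low-poly vertex outward along its averaged normal --
    the usual automatic cage inflation."""
    n = averaged_normal(face_normals)
    return tuple(c + nc * offset for c, nc in zip(v, n))

# A cube corner averages three perpendicular face normals into one
# diagonal push, keeping the cage watertight at the corner.
corner = cage_vertex((1.0, 1.0, 1.0), [(1, 0, 0), (0, 1, 0), (0, 0, 1)], offset=0.1)
```

If the three face normals were kept split instead of averaged, the corner would produce three separate cage points and the bake would show gaps there.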

    • http://www.cgmasters.net/ Aidy Burrows

      Also, I used to work with Maya all the time creating the Lego series of games, and I would switch between Blender and Maya often. The Blender devs eventually added a method for a workflow which is similar to working with edges like you would in Maya (basically just like you would with the edge split modifier, but without the edge split modifier! If that makes sense!).

      All you have to do is make sure your object is all smooth normals. In other words, Tab into Edit Mode on the cube, select all the faces and press Ctrl+F > Shade Smooth, then go to the Properties window, find the Object Data tab and enable ‘Auto Smooth’. Then select an edge and press Ctrl+E > Mark Sharp! TADA! No edge split modifier.

      I’ll do a video on it as it seems to trip a few people up.

      The edge split modifier still has its uses sometimes though, so either workflow is fine. 🙂


      • Alex “Reil” Gallucci

        Thank you for the suggestion, I didn’t know about the implementation in Blender!
        It’s good to be able to switch to a new program without losing the usual workflow.

      • Ex_Vorgier

        Hey, I was just wondering if you’ve made a video based on this comment yet. Doesn’t Mark Sharp only have an effect if the edge split modifier is used?

        I’m also a little confused about the part where you say to clear all seams and then mark seams from UV islands. Isn’t that just going to re-apply the seams in the locations you just removed them from, making this step redundant?

        • http://www.cgmasters.net/ Aidy Burrows

          Yes I’ve added the video on this now, here it is… http://www.cgmasters.net/free-tutorials/blender-tutorial-hard-soft-edges/

          I’ll add it to the main post here too. 🙂

          Regarding the seams, yes, I think you’re right. I’m not sure whether at one point Blender needed this extra step, but at the moment it does seem to serve no purpose beyond definitely making sure! haha. 🙂 Aidy.

        • http://Meltinglogic.com/ Guilherme Henrique

          Hi, apologies for the delay!

          Reading back now, it seems the explanation for that part was a bit oversimplified, sorry about that!

          What I was referring to is: some actions you might do during the unwrap stage don’t necessarily create seams (Smart UV Project, projections and cutting faces in the UV editor, to name a few), so if you want to mark them sharp for an optimal bake later (or for anything else that needs the seams to be there in the mesh), you can do that, so there’s nothing missing.
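The "seams from islands" idea boils down to this: an edge lies on a UV-island boundary whenever the two faces sharing it disagree about its UV coordinates. A small pure-Python sketch of that detection (the data layout and function name are invented for illustration; Blender's own operator works on its internal mesh data):

```python
def seam_edges(faces):
    """faces: list of faces, each a list of (vert_index, (u, v)) loops.
    An edge is a seam when the two faces sharing it map its endpoints
    to different UVs, i.e. it sits on a UV-island boundary."""
    edge_uvs = {}
    for face in faces:
        for i in range(len(face)):
            (a, uv_a), (b, uv_b) = face[i], face[(i + 1) % len(face)]
            key = (min(a, b), max(a, b))  # undirected edge
            edge_uvs.setdefault(key, []).append({a: uv_a, b: uv_b})
    seams = set()
    for key, uses in edge_uvs.items():
        if len(uses) == 2 and uses[0] != uses[1]:
            seams.add(key)
    return seams

# Two quads share the 3D edge (1, 2), but the right quad maps it to
# shifted UVs, so that edge is an island boundary -> a seam.
left  = [(0, (0.0, 0.0)), (1, (0.4, 0.0)), (2, (0.4, 1.0)), (3, (0.0, 1.0))]
right = [(1, (0.6, 0.0)), (4, (1.0, 0.0)), (5, (1.0, 1.0)), (2, (0.6, 1.0))]
print(seam_edges([left, right]))  # {(1, 2)}
```

Any unwrap operation that splits islands without marking seams (like the projections mentioned above) produces exactly these disagreeing edges, which is what the seams-from-islands step recovers.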

          However, many of these steps are optional, and just a tip in case you’re having problems with your bakes and want to test something to see if it solves your issues.

          Things like checking all those options in the export menu aren’t necessary anymore either.

          Hope that clears it up, cheers!