I still don't "get" it, but once someone told me it was a "modal GUI" (as a vim user), it clicked a bit more for me. That helps, but I don't have an artistic bone in my body, and I fail to pick up Blender once a year. (That said, I have no reason to use it, save maybe toying with 3D printing meshes.)
Another good reason to use Blender is video editing. You can edit the videos, and even create 2D graphics to lay over the video.
As the other open source video editors were so bad, and Blender had such a good reputation, I tried to use it to edit my family videos. But the interface was so weird for a casual user that I quit. This was one of the reasons I came back to Windows after decades.
There are NO good Open Source video editing software made for Linux, and I'm pretty confident about this.
I'm currently using Shotcut; it's buggy and just works, nothing great. With it you can crop the video, change colors, do some simple compositing, etc., but that's about all.
If you have a good computer, you can try DaVinci Resolve. It's not open source software, but it will give you an After Effects-level experience.
I had some experience with 3ds Max & After Effects, and I tried Blender a few days ago (for editing video and 3D modeling). What I found is that the UI of Blender is a chaotic torment even for people like me. It even caused me a hard time when all I wanted was just to "close" (or shrink) a panel.
I'm not saying Blender is bad software, I'm just hoping Blender can get a more organized, focused and solid UI. Then it will be friendlier for newcomers who want to learn it.
> "There are NO good Open Source video editing software made for Linux"
What about PiTiVi? Is it one of the video editors you've tried?
Yes, I tried it. But that was 5 years ago, and the excellent free (as in beer) HitFilm Express won me over.
OpenShot is pretty good. Good enough that when they finally ported it to OSX I switched to it from iMovie.
I tried it the other day and couldn't find a way to make precise cuts. You can move frame by frame, but I didn't succeed in cutting at the current position. I switched to AviDemux. The video editing I've done in the past has been mostly with ffmpeg, i.e. command line and scripts.
Wouldn't Resolve be closer to Premiere than After Effects? Having barely used resolve (my computer wasn't strong enough to mess with the 4k video I shot, and davinci didn't like the codec it was originally shot in) and with probably 25 hours in Premiere and After Effects, I could totally be wrong, but Premiere and Resolve felt more like editors, while AE was more for post effects and such.
Well, I don't have any experience with Premiere; After Effects is the only one I can use for this comparison. So.
For those reading this post and wondering "what is it like to 'close (Or Shrink) a panel' in Blender?":
Here is a sketch of what Blender looks like by default:
+-------------------------------+
|               A               |
+----------------------+--------+
|                      |   B    |
|          C           |        |
|                      +--------+
|                      |   D    |
+----------------------+        |
|          E           |        |
+----------------------+--------+
1. Areas are always rectangular
2. Every pixel in the window belongs to exactly one area
3. Every area is at least as tall as the height of a menu bar, and at least as wide as the height of a menu bar.
Because of these rules, some arrangements of areas are impossible. For example:
| A |
| +---+ | B |
| | F | +--------+
| +---+ C | |
| | |
| | D |
| E | |
This is also not possible, because it would violate the same rules:
| A |
| | B |
| C | |
| | |
| | D |
| E | |
You could also do this:
| A |
| | |
| | B |
| C | |
| | |
+----------------------+ D |
| E | |
Besides changing the size of the currently-existing areas, you can also add new areas by dividing existing ones in two. For example:
| A |
| | | B |
| | +--------+
| C | F | |
| | | |
| | | D |
| E | |
Horizontal divisions are, of course, also possible:
| A |
| | B |
| C | D |
| | |
+----------------------+ F |
| E | |
| A | | A |
| | B | | | |
| +--------+ | | |
| C | | -> | C | |
| | | | | B |
| | D | | | |
+----------------------+ | +----------------------+ +
| E | | | E | |
One more note about joining areas. With the default layout, only area C can be joined to area E:
| A |
| | |
| | |
| C | B |
| | |
| | |
| E | D |
+-------------------------------+ +-------------------------------+ +-------------------------------+
| A | | A | | A |
+----------------------+--------+ +----------------------+--------+ +----------------------+--------+
| | | | | | | | |
| | | | | | | | |
| C | B | -> | | B | or | C | B |
| | | | C | | | | |
| | | | | | | | |
+----------------------+--------+ | +--------+ +----------------------+--------+
| E | D | | | D | | D |
+----------------------+--------+ +----------------------+--------+ +-------------------------------+
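For the curious, the three tiling rules above can even be written down as a quick validity check. This is purely an illustrative sketch (made-up function and pixel sizes, not Blender's actual code):

```python
# Sketch of the area-tiling rules: areas are axis-aligned rectangles that
# must exactly tile the window with no overlap, and each must be at least
# as tall/wide as a menu bar. MIN_SIZE is an assumed value.

MIN_SIZE = 26  # hypothetical menu-bar height in pixels

def valid_layout(window_w, window_h, areas):
    """areas: list of (x, y, w, h) rectangles."""
    # Rule 3: minimum width and height.
    if any(w < MIN_SIZE or h < MIN_SIZE for (_, _, w, h) in areas):
        return False
    # Rules 1 and 2: no overlaps + total area equal to the window area
    # together imply every pixel belongs to exactly one area.
    if sum(w * h for (_, _, w, h) in areas) != window_w * window_h:
        return False
    for i, (ax, ay, aw, ah) in enumerate(areas):
        if ax < 0 or ay < 0 or ax + aw > window_w or ay + ah > window_h:
            return False
        for (bx, by, bw, bh) in areas[i + 1:]:
            if ax < bx + bw and bx < ax + aw and ay < by + bh and by < ay + ah:
                return False  # two areas overlap
    return True

# A default-like layout: header A, main area C, B/D stacked on the right,
# timeline E under C.
layout = [
    (0, 0, 800, 30),       # A: header bar
    (0, 30, 600, 400),     # C: main viewport
    (600, 30, 200, 200),   # B
    (600, 230, 200, 370),  # D
    (0, 430, 600, 170),    # E
]
print(valid_layout(800, 600, layout))  # True
```

An arrangement like the "impossible" examples would fail this check, because a floating area either overlaps a neighbor or leaves pixels uncovered.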
Wonderful and thorough explanation, thank you!
I went through and evaluated all of the commonly mentioned open source video editors recently.
The only one that actually seemed usable was kdenlive.
Openshot, Shotcut etc. were all unusable for one reason or another.
I recently tried out Kdenlive and was very impressed.
I'm not sure what software you've tried, but I've found Kdenlive to be pretty good for video editing. It's best if you can use a recent version though; the old versions that some LTS distributions picked up were not great.
The issue I ran into with Kdenlive was that I couldn't figure out how to cut within a clip without saving to a separate file.
I don't have much experience with Kdenlive or video editing in general but am looking for an Audacity-like tool that can slice out useless portions of longer videos (and also mute sections of audio). For now I use ffmpeg on the command line, but wrangling timestamps is cumbersome.
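Since the timestamp wrangling is the painful part of the ffmpeg route, a small script can at least generate the commands from a list of ranges. A rough sketch (the filename is made up, and note that `-c copy` is only frame-accurate when cuts land near keyframes):

```python
# Turn a list of (start, end) second ranges into ffmpeg trim commands.
# Stream copy (-c copy) avoids re-encoding but snaps to keyframes;
# drop it to re-encode for exact cuts.

def trim_commands(src, keep_ranges, out_prefix="part"):
    cmds = []
    for i, (start, end) in enumerate(keep_ranges):
        cmds.append(
            f"ffmpeg -i {src} -ss {start} -to {end} -c copy {out_prefix}{i}.mp4"
        )
    return cmds

# Keep 0:00-1:35 and 2:20-5:00 of a hypothetical clip:
for cmd in trim_commands("family.mp4", [(0, 95), (140, 300)]):
    print(cmd)
```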
When I need to trim part of a clip in Kdenlive, I just use the Cut tool on the timeline and delete the portion I don't need. This doesn't create a separate file, it just tells it to only use a portion of the original file. It's been a while since I've used it though, so there might be an even easier way.
For muting you can apply the Mute audio effect for the required portion of the video.
I think I was expecting cutting to work with a selection (a click + drag to select a range) rather than clicking to cut at the start, then again at the desired end, and finally removing the newly segmented portion.
Appreciate the help as I learn how to properly use this tool!
You can also just drag the edges of the clip in the timeline.
Nice - I forgot about that!
Have a look at avidemux, which can even cut videos without re-encoding.
Avidemux is a great tool, but it would require creating a second file I believe.
My workflow is usually to use Avidemux to do a rough trim of a clip (e.g., to take a five minute clip and trim it down to about a minute with the interesting parts in it) and get it into the format I want, then I use Kdenlive for the finer work.
To use it as a video editor you really have to understand too much about computer graphics in general, and how Blender does things in particular.
I find that lightworks does a great job
Having used Blender for video editing well after I became familiar with the tool, I actually found it more intuitive and featureful than most open source alternatives.
Turns out editing video and audio is incredibly similar to 3d animation.
I sure hope I can create my name, but on fire, with smoke effects; it'll look great at the start of all my YouTube videos.
I think you should first learn the hotkeys and shortcuts. That is what helped me a lot. There are a lot of cheatsheets online that you can use.
The GUI is easy to learn, but the keys are the key to blender.
It's all about the hotkeys. I'd say that you have to learn them in order to learn Blender with any sort of efficiency.
Aren't "modes" in UI a recognized bad practice? See: https://en.wikipedia.org/wiki/Mode_(computer_interface)#Mode... or read Jef Raskin writings.
How can a modal interface be "better" than a modeless one?
Modal interfaces are bad practice for software that will be used by people with minimal training, in a "pick up and go" fashion, but they are not necessarily counterproductive for skilled experienced users. Douglas Engelbart was a strong proponent of modal UI's and IIRC did extensive studies showing that although disadvantageous for "newbies", such interfaces could eventually yield higher productivity. I don't know how much of this was simply due to his having such a small and biased sample. It is also important to point out that his conceptualization involved using a 5-key chording keyboard in one hand and a 3-button mouse in the other for most routine operations, so the modes helped extend the number of operations that could be encoded. Anyway, as someone who uses emacs instead of vim, on a regular old keyboard, I really can't say personally.
EDIT: I suppose many people use a program like Blender with one hand on a mouse/trackball and the other on a 3d mouse, so the point about having different modes to get more versatility out of the same few buttons is still relevant. With that approach I use the radial menus plugin for Blender which makes changing modes pretty painless IMHO.
> It is also important to point out that his conceptualization involved using a 5-key chording keyboard in one hand and a 3-button mouse
Don't forget the foot pedals!
Blender is definitely not modal in the Raskin sense. It extensively uses "quasimodes", but so do Raskin's own designs. That or the different view layouts mentioned by another commenter may have caused the confusion.
Modes are essential to complex, specialized software like this. As an audio engineer with vast experience in different DAW applications, take away my modes and take away all semblance of my productivity. :)
I think it's more of a separate workspace layout for different tasks, i.e. one "mode" for modeling, another for rigging, and yet another for animating.
It's not just a workspace layout; it does have those, but they're separate from the modes. Different modes (object mode, edit mode, sculpt mode, texture paint mode, weight paint mode, etc.) all have different hotkeys and in many cases have functionality that isn't available at all in the other modes. There are some conventions shared across modes, like the G/R/S hotkeys for move/rotate/scale, or X to delete things. But the majority of Blender's operations are mode specific.
If you're in one of the painting modes, F is the hotkey to change brush size. But you can't add faces in the painting modes.
If you're in edit mode, F will create a face from the selected vertices or edges. The concept of a brush doesn't exist here.
If you're in object mode, F doesn't do anything.
For another, I is "Inset Face" in edit mode, but "Insert Keyframe Menu" in object mode. This one is a bit different, because the keyframe menu is actually available in any mode and it just doesn't have a hotkey unless you're in object mode. But like the other mesh editing tools, Inset Face only exists while you're in edit mode.
Screenshots for reference:
Workspace layouts https://i.imgur.com/7g61twc.png
Step 1: Create a bunch of simple primitives that represent a sculpture of your model. Generally speaking: squares and triangles.
Step 2: Add colors and textures to the model. Textures are 2d images mapped to the squares and triangles you made in step 1. With proper care, you can make triangles and squares look "smooth and circular" with proper shading effects.
Step 3: Simplify your model into "bones", which can deform the model with fewer control points. For example, you can create a "neck" bone which moves all of the polygons that represent the head. Bone modeling is itself a very intricate process that takes a lot of practice.
Step 4: Bone models are far easier to animate. Move bones around, instead of polygons. Instead of selecting the "head" each time, you simply select "the neck bone" to move things around.
Step 5: Lather, rinse, repeat for every object in the scene.
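The payoff of steps 3-4 can be shown with a toy vertex-skinning loop: vertices carry weights to bones, and moving one bone drags every weighted vertex along. This is a deliberately simplified illustration (translation only, no rotations, nothing like Blender's real armature system):

```python
# Toy "bone" deformation: each vertex moves by the weighted sum of its
# bones' offsets, so animating one bone moves many vertices at once.

class Bone:
    def __init__(self, name):
        self.name = name
        self.offset = (0.0, 0.0, 0.0)  # translation for the current frame

def deform(vertices, weights, bones):
    out = []
    for v, vw in zip(vertices, weights):
        dx = sum(bones[b].offset[0] * w for b, w in vw.items())
        dy = sum(bones[b].offset[1] * w for b, w in vw.items())
        dz = sum(bones[b].offset[2] * w for b, w in vw.items())
        out.append((v[0] + dx, v[1] + dy, v[2] + dz))
    return out

bones = {"neck": Bone("neck")}
head_vertices = [(0.0, 0.0, 1.0), (0.5, 0.0, 2.0)]  # both belong to the head
weights = [{"neck": 1.0}, {"neck": 1.0}]            # fully weighted to "neck"

bones["neck"].offset = (0.0, 0.0, 0.5)  # move the neck bone up half a unit
print(deform(head_vertices, weights, bones))  # both vertices follow the bone
```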
Also, great artists steal:
Find a work of Kandinsky or Picasso you like and try to transfer it into blender using the same geometry shapes the artist is using.
Then add your own twists: extra light nodes / special displacement textures / flying haiku texts going across the screen at appropriate times, hah.
Or more seriously, use "MakeHuman" to make a base human model, and then modify it to fit your needs.
There's lots of ways to "steal legitimately". In this case, the human model and bones model are all set.
What do you use Blender for in your job?
Typically I use it to create illustration style renders that I use within pages. I create a lot of complex isometric diagrams for my current job that are interactive. So I create them in blender, chop them up, then load them in with WebGL or just slice them into image layers that allow me to create interactive elements.
Back before mobile interfaces went flat (the skeuomorphic days) people were rendering out tons of assets for interfaces. The pendulum will eventually swing back in that direction...
I'm not him, but as a front-end dev too, my guesses would be a few:
* WebGL and 3D Models
* Creation of Assets for web
* Maybe something more abstract like prototypes, flow control, etc
Sick abstract backgrounds on internal wiki pages / out of this world renderings of office memes to keep morale high and people in good spirits
From one Blender user to another; learning curve is not really steep compared to 3DS and Maya. Not noticeably steeper at least.
I don't agree. For starters, the more traditional UI of Maya is easier to grasp than Blender's. That's quite a difference that makes the learning process much easier for Maya.
Yup, both max and maya have such a 'traditional' (for lack of a better term) kind of UI - it's (for the most part) intuitive, whereas blender's just appears to be spread out all around the sides of the screen and it just feels like a mess. Even the basic aspect of interacting with the viewports is weird, why is there a crosshair, why can't I drag things around; what the hell is this weird lasso thing?
Blender is definitely odd and quirky compared to Maya or Max. All of them have a steep learning curve, but Blender is weird on top of being complicated.
Is Blender close enough to a 3D CAD app that I could use it to build my models for 3D printing?
I've used SketchUp a lot, but it has many limitations. Other 3D modelers are either proprietary, or don't seem to provide enough automation to justify moving away from SketchUp.
I have used blender for some 3d printing projects. Shapeways has several articles describing the process and highlighting some of the issues (eg https://www.shapeways.com/tutorials/prepping_blender_files_f...).
However, while I think it's very good for animation/rendering purposes, it is missing a lot of things you might expect in a full CAD app. I use it to build meshes and then describe how to deform them. It has some procedural constraints. I recall it has some snapping behavior, and googling around shows some plugins that try to add some of this missing CAD functionality, but I can't really judge their success.
Maybe see also
Yes. It is a fully capable 3D authoring tool.
Would you use it for your 3D print projects? (assuming you did any 3D printing)
I've personally considered it. If you're more of a visual modeller person, it very well might be ideal.
If you're a programmer, however, consider looking into OpenSCAD or OpenJSCAD:
Not only are they more programmer friendly, but making parametric models should be much easier as a side-effect.
I do use it for 3d printing projects. It's far from ideal, and doing anything precise in blender means having to learn your way around the various snap tools and the like. But it does work, and work nicely once you learn how to.
That said: you should probably learn some other programs too. Personally I use openscad and solvespace too depending on the use case.
How does it compare to Modo?
And also: Why is there "render noise?"
Cycles is a pseudo-random path tracer, so the fewer samples are taken, the higher the random noise. Increasing the samples and some other tweaking reduces the noise significantly, but also takes much more time. Denoising the rendered image can produce better visual quality for less render time.
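The noise/samples relationship can be demonstrated with a toy Monte Carlo estimator (not a real path tracer, just the same statistical idea): the estimate's error shrinks roughly with the square root of the sample count.

```python
import random

# A path tracer estimates each pixel's brightness by averaging random
# light-path samples. Averaging a uniform [0, 1) variable (true mean 0.5)
# shows the same behavior: more samples -> less variance -> less noise.

def estimate(n_samples, rng):
    return sum(rng.random() for _ in range(n_samples)) / n_samples

rng = random.Random(42)
low = estimate(16, rng)      # like a quick, noisy preview render
high = estimate(4096, rng)   # like a long, clean render
print(abs(low - 0.5), abs(high - 0.5))  # the second error is typically far smaller
```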
Thanks for that info. Do other renderers use non-pseudo-random methods that don't produce noise? I don't recall ever seeing noise brought up as an issue in a 3-D renderer.
Compare in what area?
UI polish, speed, rendering quality.
I don't have much experience with 3-D modeling software, so I don't have further specifics.
Yet again I'm amazed by Blender's progress. I've been a 'hobby 3D artist' for the past decade and find it incredibly useful in my full-time job (front-end dev). Blender to me is a shining example of an open source project that seems to constantly improve and give proprietary apps like 3ds Max and Maya a run for their money.
While the learning curve for Blender can be fairly steep, once you get over that hump, it's an absolute joy to use. A lot of people struggle with the UI, but once you 'get it', it becomes an incredibly fluid and well-thought-out interface. It still has its quirks, but as far as 3D packages go, the UI is actually really great.
This update adds lots of things that I have been waiting for. Shadow catchers with Cycles have always been possible, but it required a fairly obtuse method to get there. The denoiser for Cycles is an incredibly nice feature and works incredibly well. The PBR shader is also a real joy to use. Coupled with a decent set of textures, PBR shaders make shading a seriously fun activity. Although it's nothing new (in the industry), it's really nice to have it in Blender.
A real asset to any designer's toolkit.
Also this: https://www.blenderguru.com/tutorials/blender-beginner-tutor...
Is there something specialized for 3D/CAD design for creating models for 3D printing? Thanks
I like OpenSCAD for printing, since it encourages you to use precise measurements, and makes it fairly simple to create reusable components.
That said, I've mostly used it for "I need to create something that will fit these inside of it", I could imagine your priorities are different if the print is the "driving" component of whatever you're making.
Thanks, I tried Blender but had to do research to learn to use it properly or "get it". On the other hand I used OpenSCAD and, being used to code, it really fit my mindset, thank you! In a few hours I had what I had in mind as an STL model :)
What software you use to create something to 3D print will depend on what it is you want to make. There are two types of 3D modeling software: solid modeling and surface modeling. Solid modeling, or CAD, is used in engineering, when you need accuracy in measurements. Surface modeling is used in video games, for artistic models. In the world of surface modeling, Blender, from what I found, is the best FOSS option. I have not been able to find a good equivalent for CAD... You either have to shell out thousands of dollars, or you can find some open source solutions that lack features. If you have the money, I recommend SolidWorks for CAD.
If anyone knows of some good affordable or FOSS CAD software I'm still looking for my own projects.
I don't know about blender; but many people use autodesk fusion 360 to do CAD/CAM for 3D printing. It is free for personal use, and easy to learn.
It also has the benefit of being able to output G-Code directly, so if you get a desktop CNC or the like it can also control those. Having a fully integrated workflow between CAD and CAM is a big benefit IMO.
I'm not sure about specialized software, but you can just make your model in blender, and export it as a .stl file.
Sketchup works pretty well in my experience.
I just started fiddling around with Blender the other day for fun. There's an official 25-video YouTube list of tutorials that really gets your feet wet nicely, and it can be done on a lazy weekend morning. I loved it and have enjoyed how far you can get with just a few hours of tutorials.
Another way to support the project is by subscribing to Blender Cloud. It gives you access to tutorials, etc.
Just a reminder: if you find Blender useful, consider donating - https://www.blender.org/foundation/donation-payment -
I feel Manuel Bastioni Lab creates far more realistic models: http://www.manuelbastioni.com/manuellab.php
Open source, and it integrates directly into Blender as a plug-in.
The similarity between the technologies used to build those two websites immediately caused me to believe the two projects are very closely related. Can you share any additional details regarding the relationship between the two?
Same main author, one is integrated into Blender and the other is standalone.
Let's give some love to MakeHuman now that we're on this topic. I was astonished to find this gem when I was trying to model characters. It exports human 3D models to Blender with rigging etc.
And of course it is open source and free.
Hah, yeah. It made me switch from Google Protocol Buffers to Cap'n Proto because I needed scene export from Blender to my own format.
What does this have to do with anything? And why shouldn't they have? Did it bring any problems?
I think the parent was just trying to say that this is neat - it was much earlier than most other projects did, considering Python 3 was only released in December 08. Kudos! Though I think Blender made the switch with version 2.5, in 2009.
Alright then, I read it as a criticism. My bad, I seem to be traumatised by seeing Python 3 being constantly bashed x)
It was meant as an irony of history, because the rest of the Python scene has barely made the switch even now.
Lesser known fact: Blender switched to Python 3 in 2008. I guess they thought the rest of the Python community would leave them behind if they wouldn't ...
3DCoat is a specialized voxel sculpting application, quite unlike Blender - it's a waste to use it for mere texturing!
Once you get the hang of it, voxel modeling (3DCoat's version) is really intuitive and produces certain results much more efficiently than NURBS (non-uniform rational B-splines), SDS (subdivision surfaces) or CSG (constructive solid geometry).
I'm not putting down Blender at all - these two packages greatly complement each other!
Ah, so true. But, given an existing unwrapped Blender model the superb texturing capabilities are astounding. I don't have to think about specular maps, bump maps, diffuse maps etc as distinctive assets anymore. I can just paint with friggin stone, metal and dirt and scratches with data that is transferable to several rendering packages.
Other packages do offer this, but the affordable price of 3DCoat makes it an exceptional option for hobbyists (IMO).
3D-Coat is a fairly fully featured model creation package. It has very good texturing facilities, not as good as Substance or Modo, but very good.
Actually the single best feature in 3D-Coat is its UV unwrapping and editing functionality.
Try the Substance suite if you want an actually good texturing program. Very affordable licensing, too, with an optional license-to-own pricing model.
Coming from 'old school' texturing pipeline with hand assembled specular, normal etc. maps to get something akin to realism by just painting with the selected material and having the program automatically fill in the various channels feels amazing. I was not really pushing the program but the workflow enabled by it. If substance works as well that's cool to know.
Thanks for the tip! I downloaded the trial and it seems really nice.
Sure thing! After you spend a few days with it, I'd love to hear how you compare it to 3DCoat. I've considered picking up 3dCoat not for its texturing but its voxel workflow.
Substance Painter is also a great option for texturing, costs a lot less and is extremely popular among biggest game developers.
A lot less? The cheapest 3DCoat licence seems to be 99 dollars, while Substance Painter costs 145. The non-limited 3DCoat is more expensive, sure.
Wait for the Black Friday sale or a Steam sale to buy Substance Painter. It should be significantly cheaper.
Also, Substance Painter works on Linux which is important for some people.
3dcoat is also available for Linux.
The cheapest 3DCoat license prohibits any commercial use. The less restrictive license costs a lot more.
Finally, the ubershader! If anyone visually inclined wants some serious eye candy, I warmly recommend buying 3DCoat and using that for texturing and Blender for rendering (and modeling). Once you know how to model, it's astounding how fast you can turn shit from your imagination into something photorealistic. It's almost like magic nowadays... You need a beefy GPU (or lots of cores) to thoroughly enjoy it, though, to tune the lighting in real time.
It's sculpting and miniature painting and photographing and it's such fun!
Not that you have to be grumpy, but it requires a certain personality to work on a single project for years while avoiding complete burnout. I think that's true not just for open source.
Also, in the beginning it's extremely important to have someone to keep the project together, and that also requires some dedication. Though most projects become more self-sustaining once the community grows big enough.
I've been impressed by this as well (not just in the blender project). I think having a good leader who can maintain a clear architecture for the project has a huge impact on the consistency of the result. It can be expressed as something like coding style or source tree layout, but also affect which features make it in, and how they affect a user's workflow.
I don't know how they manage it, exactly, but it contrasts quite a bit with other projects I've seen that feel more haphazard, get-any-feature-in-and-we'll-polish-it-up-later. With blender, things feel polished long before we see them in the release.
I contributed to blender for a few years and rarely interacted with Ton, never found him to be grumpy either.
Ton Roosendaal has been the utter backbone of this project since before it was a thing. I wonder if having an opinionated and mildly grumpy leader is mandatory for an Open Source project to survive and thrive as Blender clearly has?
PBR (physically based rendering) shaders are extremely realistic. The Suzanne monkey example is pretty neat, but you can find mind-blowing examples from other artists.
Those wear marks and fine textures probably are photos, kind of. It turns out one of the ways photorealism is achieved is by mapping textures onto 3d objects, and those textures usually originate from photos.
From heavily edited photos where the original surface has to be decomposed into rough estimates of base color, metallicity, roughness, anisotropy, subsurface scattering, transparency, normal maps, etc etc.
Light reflects/diffuses off metals in a different way than off non-metallic objects (insulators). This gives metals the characteristic shine you can't replicate in the real world without metallic particles (metallic paints). PBR shaders take this into account, which creates the so-called "metalness workflow" instead of the "specular workflow".
With the specular workflow you would specify the color of diffused light and the color of reflected light (a specular map, usually white for insulators and in the color of the metal for metals).
In the metalness workflow you specify an albedo map and a metalness map. The metalness map is usually 1 or 0, as materials in the real world are either metals or not.
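The relationship between the two workflows can be sketched in a few lines, using the common ~4% reflectance approximation for dielectrics (this is the generic PBR convention, not Blender's exact shader code, and the gold albedo is made up for illustration):

```python
def lerp(a, b, t):
    return tuple(x + (y - x) * t for x, y in zip(a, b))

DIELECTRIC_SPEC = (0.04, 0.04, 0.04)  # common ~4% gray approximation

def metalness_to_specular(albedo, metalness):
    """Derive specular-workflow maps from metalness-workflow inputs."""
    # Pure metals have no diffuse component; insulators keep their albedo.
    diffuse = lerp(albedo, (0.0, 0.0, 0.0), metalness)
    # Insulators reflect a faint white; metals reflect in their own color.
    specular = lerp(DIELECTRIC_SPEC, albedo, metalness)
    return diffuse, specular

gold = (1.0, 0.77, 0.34)  # rough gold-ish albedo, illustrative only
print(metalness_to_specular(gold, 1.0))  # metal: black diffuse, gold specular
print(metalness_to_specular(gold, 0.0))  # insulator: gold diffuse, 4% specular
```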
Interesting—I've read the original PBR book but didn't realize that such robust support and culture had grown up around it.
Even the dents on the lens? A texture wouldn't be able to generate actual geometric deformities, would it?
Textures can create the illusion that certain small deformities exist. This is often used for scratches and things that are not always modeled.
Scratches like these could sometimes be painted directly onto the texture - or maybe sculpted onto a higher-resolution model and then "baked" down to the lower-resolution model. In this step lots of data from how light bounces off the high-resolution model can be imprinted to a texture and re-used in a lower-resolution model.
If you want to learn more the terms to be googling are "normal mapping" and "bump mapping"
In this article, if I were to guess, all the pictured spheres are perfectly spherical and just textured differently. https://www.marmoset.co/posts/physically-based-rendering-and...
Also - here's an interactive model of the lens: https://www.artstation.com/artwork/kl0A6
The "Layers View" button in top right gives you a neat view of the model with/without certain texture layers
Not talking about bump mapping.
Talking about the dents on the lens' silver filter ring. I doubt it was modeled.
That can be done pretty easily with adaptive subdivision and displacement mapping. You could also just create a super high-poly mesh manually and paint in the dents.
Sure. That's called displacement mapping. Fairly ubiquitous in high-end renderers.
Interactive PBR of the metallic monkey: https://sketchfab.com/models/39128f2ba3db4d30be7238a569fa234...
Man, the interactive stuff is getting so good. Never thought I'd see a reasonably photorealistic steampunk suzanne rendered realtime in my browser. (on a three year old low-end chromebook!)
Wow, I didn't realize how good sketchfab's model viewer is. Thanks for sharing this.
Render. The specularity of the surfaces is not quite right yet... with some more tweaking this could surely pass as a photo, though. Some wear marks are odd too, but that can happen in real life.
Definitely a rendering. But I was taken aback, I have a couple of similar Olympus and they are uncannily similar to that image.
Definitely a render.
The image of that camera under "PBR Shader": Is that a photo or a rendering? Caption and context imply it's a rendering, but it looks too realistic for me to believe, with all those wear marks and fine textures.
For those who are new to Blender, the absolutely coolest part of Blender is that it's fully programmable. Anything you can do through the UI, you can do with Python method calls.
You can create, modify and render scenes from the command line only, no GUI needed.
You can do that here: https://www.blender.org/foundation/donation-payment/
Guys, please support Blender financially. This is a most awesome open source project, and so polished!
Wow, their website has improved a lot since I last visited too.
And I like how they make User Interface improvements notable additions to their change log / announcements. Such an important part of software.
Blender is a staple application if you are into 3D Printers nowadays. Keep up the good work, team! :)
The "Application Template" feature is an indication that the developers have caught on to the idea that Blender has become an application framework. I hope they will embrace this more, in the future.
I wrote a couple of command line "apps", which are basically blender running with a particular script, a particular Python Path and a particular blender file, then with some command line arguments.
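For reference, the trick that makes those command-line "apps" work is that Blender stops parsing its own arguments at a bare `--` and leaves the rest for the script. A minimal sketch of the argument handling (the .blend and script names here are hypothetical):

```python
# Invoked as something like:
#   blender --background scene.blend --python render_app.py -- --frame 12
# Blender consumes everything before "--"; the script reads the rest.

import sys

def script_args(argv):
    """Return only the arguments after the '--' separator, if any."""
    if "--" not in argv:
        return []
    return argv[argv.index("--") + 1:]

# Simulated argv, as Blender would deliver it to the embedded interpreter:
argv = ["blender", "--background", "scene.blend",
        "--python", "render_app.py", "--", "--frame", "12"]
print(script_args(argv))  # ['--frame', '12']
```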
I think it's at the point now where the rendering method isn't as critical -- many renderers are capable of photoreal output; what gives it away is the subtle details, like the way objects settle, or how rooms that are actually lived in are worn.
Bonus: I thought to compare the sterility of the bathroom pictured to an IKEA catalog, and thus looked one up. I couldn't tell whether the pictures were CGI or photographed, and did a little digging -- apparently ~75% of IKEA's catalog is CGI.
I've made the observation previously that it's harder to tell the difference between CGI and Fake Actual than between CGI and Real.
The examples I gave previously are movies. The CGI of a bedroom in a movie doesn't look noticeably more fake _because_ all the actual "bedrooms" in movies are sets built by carpenters a few hours before filming. The "actual" bedroom lacks all the details of a real bedroom, so it doesn't surprise us when a CGI one does in a movie.
It's interesting to compare Dogme 95 films, a movie made strictly to those rules is using real spaces not sets - among many other constraints, and so it looks more "real". Of course you can't make Iron Man 2 or The Fast and the Furious this way, but a lot of other stuff _could_ be done under these rules if we wanted it to look like the real world.
Yeah. I think I like the sRGB version of it more though...
It's not about "realism" or even style, it's more about preserving detail in extreme contrast. All pixels brighter than a certain threshold become white, and since there's nothing whiter than white, they're all the same, with information lost. This is basically HDR (no idea why the "Filmic Color Management" phrase is necessary); it has been common in real-time rendering and digital photography for years. I suppose they implemented it in a smarter way? The basic idea is to expose at different levels and combine the parts that preserve the most detail.
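The "expose at different levels and combine" idea can be sketched as a toy per-pixel exposure fusion. This is not Blender's algorithm, just an illustration of the principle; the `fuse_exposures` helper and its mid-grey target are assumptions:

```python
def fuse_exposures(brackets, target=0.5):
    """Toy per-pixel exposure fusion: given the same pixel sampled at
    several exposure levels, keep the sample whose clipped display
    value sits closest to a mid-grey target, i.e. the best-exposed
    one, and report which bracket it came from."""
    clipped = [min(v, 1.0) for v in brackets]
    best = min(range(len(clipped)), key=lambda i: abs(clipped[i] - target))
    return clipped[best], best

# One pixel of a bright window: clipped in the two brighter
# exposures, usable only in the darkest one.
value, idx = fuse_exposures([3.2, 1.1, 0.45])
print(value, idx)  # -> 0.45 2
```

Real exposure fusion blends weighted contributions per pixel rather than picking a single winner, but the detail-preservation idea is the same.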
I think it's called Filmic Color Management because of this:
"... imitate an effect that's unique to film: as exposure increases, colors will become more and more desaturated."
Whereas in many HDR algorithms, there is no such thing going on.
For rendered images, you don't need to expose different levels, as a single render done in 32 bit floating point usually contains enough information to cover the visible dynamic range (and beyond). The improvement in Blender 2.79 is that it includes new transformations from the render color space to the display space.
"Filmic" is just a name that has been in use lately to emphasise when such transformations mimic the behaviour of film, as opposed to for example tone mapping operators that mimic the behaviour of the human eye.
I don't know why they chose that scene, these ones show the difference much more: https://www.blenderguru.com/tutorials/secret-ingredient-phot...
The sRGB image might look punchier at first, but once you start looking closer you'll see it's blown out and the colours are wrong. It's like putting 10 spoonfuls of sugar in your coffee.
The real advantage of using HDR or Filmic, is that details start to show in areas with low contrast. Try looking at the window on the right or around the faucets. The higher contrast means more details are visible.
In my opinion the point is that the artist now gets better control over the end result. I think with minor parameter tweaks you could get a pretty high-contrast scene too. Also, even the high-contrast render will probably look more realistic now.
Damn, that bathroom render is amazing; hard to tell it's not a real photo.
The most important change: HiDPI scaling on Linux for high-resolution screens. Hopefully GIMP will follow suit.
Video editing, 3D part packing -- any other off-label uses for Blender?
http://amosdudley.com/weblog/Stochastic-Part-Packing | https://news.ycombinator.com/item?id=15187108
> We used Blender’s physics engine to pour parts into a virtual volume, then exploited its collision-avoidance behavior to re-sort them into a tight 3D packing optimized for overlapping concavity.
I really admire Blender's release notes. They're detailed and informative yet also fun and easy to read (thanks to pretty pictures!), and full of real working examples if you want them. It's nice to see more projects doing them these days (GNOME's have been great as well), but Blender's are definitely the coolest. They always make me excited about the project itself :)
I've found the denoiser to be a game-changer for still renders. I can now get a decent-looking render in a fraction of the time. Remember, "just let the render converge" can take hours or days.
The improved tone mapping and uber shader look awesome!
Anyone know what's up with the surface deformer? I'm curious how they are doing this warp.
The denoiser looks cool, but without temporal coherence won't be terribly useful out of the box... for a single freeze frame just let the render converge.
Blender's such a great tool! Just donated and created a quick-start video for anyone interested: https://youtu.be/q0PMNISK0KY
[Edit: somehow dropped the h from [h]ttps in the link]
Been using Blender casually for 18 years. It is my favorite free software project.
Network rendering is being worked on:
No updates on distributed network rendering, or am I missing something?
If I am reading things right, the Radeon Vega 56 is always faster (sometimes significantly, like in Koro) than an Nvidia 1080 for rendering; that is quite surprising.
> Is it the whole GIMP vs Photoshop non-debate?
I think it is not.
I would say Blender is at professional level.
It is not like GIMP or Inkscape, which I can hardly call professional.
Many Blender users coming from other 3D software have said that Blender's UI is weird and stopped them from using it further. I guess it is a matter of habit.
>Many Blender users coming from other 3D software have said that Blender's UI is weird and stopped them from using it further. I guess it is a matter of habit.
That's the same feeling I had about Blender, hence I put it in the same category as GIMP.
The problem with Blender's UI isn't that it's bad as such, it's just that it does everything differently from other programs.
It's worth the effort to learn, though; it's a hugely productive UI once you get used to it.
I've been out of the 3D scene for years now. I have never used Blender but have used 3ds Max and Lightwave.
How does Blender compare against contemporary 3D suites? Is it the whole GIMP vs Photoshop non-debate? (I can't stand GIMP BTW)
Cycles is the only significant OpenCL application in open source that I know of.