These design techniques pretty much require 3D printing for fabrication.
One wonders about unexpected brittleness. Take that bike stem: they describe the automated design process as inputting the forces you wish to withstand and letting the algorithm optimize. That implies the final form has no validation against any other forces. The bike stem might be weak to a diagonal force.
I also see that the structural designs presented are mostly variations on the simple, well-known trick of triangulation. I'm not sure the extra optimisation is worth the more complex form: extra manufacturing complexity and problematic adaptability to the other parts you need for a completed product. Look at that car body: can you imagine the bodywork needed to make it into a real car?
I was seriously wondering about the metal one. Let's go link by link.
1. The 3D Makeover link for metal CAD work says nothing about generative techniques. It's about additive manufacturing letting them do designs like the one in the photo.
2. The antenna link says it was a generative design by automated search and simulation. It describes the Dreamcatcher system, which does this with cloud computing, and gives the example of a roll cage that looks kind of like the bike stem.
3. The bike stem link uses Dreamcatcher to design a bike stem. The video below visualizes the optimization/design process in a way that reminds me of the T-1000's liquid metal in Terminator 2.
4. The engine block link says they did a load-bearing engine block in Dreamcatcher. No other info.
5. The meta models link just takes you back to the page you're viewing. Cute waste of my time...
6. The intuition link doesn't tell me about any of these things. Instead, it's some kind of analytics product for enterprises. Sounds like a mini-SAS with Watson-style analysis or Q&A.
So, there were some relevant links, mostly Dreamcatcher demos, ending with an irrelevant one that might interest enterprise analysts. This article's citations are definitely unreliable. It's mostly interesting artwork.
With the exception of the antennas designed with genetic algorithms, the "alien" aesthetic in all of these examples looks like the output of topological optimization tools. For example: https://www.youtube.com/watch?v=igRFFMSfwSQ
They emphasize that Dreamcatcher uses a "top-down" style of design, so maybe they are using deep learning for NLP to parse requirements and then feeding those requirements into normal topological optimization tools?
 https://ti.arc.nasa.gov/m/pub-archive/1244h/1244%20(Hornby).... (pictures from their post on page 5)
I would be amazed if NLP that good existed.
Instead, I suspect top-down means something like "the part must span this bounding box, weigh no more than X grams, and survive the following forces..."
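A "top-down" spec of that kind is easy to imagine as plain structured data rather than natural language. Here's a minimal, purely hypothetical sketch (the field names, units, and numbers are invented, not taken from any of the linked articles):

```python
from dataclasses import dataclass

# Hypothetical sketch of a "top-down" design spec: explicit constraints,
# not natural language. Nothing here reflects Dreamcatcher's real API.
@dataclass
class PartSpec:
    bounding_box_mm: tuple  # (x, y, z) extents the part must span
    max_mass_g: float       # weight budget
    loads_n: list           # forces (newtons) the part must survive

def satisfies(spec: PartSpec, mass_g: float, max_load_survived_n: float) -> bool:
    """Check a candidate design against the spec (geometry check omitted)."""
    return mass_g <= spec.max_mass_g and all(
        abs(f) <= max_load_survived_n for f in spec.loads_n
    )

spec = PartSpec(bounding_box_mm=(120, 40, 40), max_mass_g=180.0,
                loads_n=[500.0, -300.0])
print(satisfies(spec, mass_g=150.0, max_load_survived_n=600.0))  # True
```

The optimizer's job would then be to search for geometry that makes `satisfies` true at minimal mass; no NLP required.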
It's not obvious that these designs have anything to do with deep learning; it doesn't appear to be mentioned in any of the linked articles.
Why couldn't generative optimization be considered AI? Iteration leading to invention is a common pattern in human intelligence.
If it's just mutation then no.
And deep learning is just optimization which is just search, which everyone agrees is not AI. /s
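For what it's worth, the "iteration leading to invention" pattern under debate fits in a few lines: a (1+1) evolutionary loop with nothing but mutation and selection. A toy illustration, not anyone's actual system:

```python
import random

# Toy sketch: pure mutation + selection ("just mutation"?) driving a
# 16-bit design toward a target fitness. Iterative search, no deep learning.
random.seed(42)
N = 16

def fitness(bits):
    return sum(bits)  # toy objective: maximize the number of 1s

best = [random.randint(0, 1) for _ in range(N)]
for _ in range(5000):
    # Flip each bit with probability 1/N, keep the child if it's no worse.
    child = [b ^ (random.random() < 1 / N) for b in best]
    if fitness(child) >= fitness(best):
        best = child

print(fitness(best))  # reaches the optimum of 16 for this seed
```

Whether you call that AI is exactly the semantic argument above; the mechanism itself is just biased random search.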
This user rarely contributes to discussions and frequently gives short, useless answers like this.
Can I ask HN why this account hasn't been banned yet? I thought the behavior this user is demonstrating directly violates HN guidelines?
If you have concerns about a member, you can email the mods directly via the Contact link in the footer. This is more effective than commenting on it.
Man, some of those examples are taken from applications of genetic algorithms and iterative algos that are definitely not deep learning... Not every computer-optimized design success is an AI success, and not every AI success is a deep learning success.
> Also, I'd guess this "alien style" has a lot to do with the user's choice of algorithm and representation.
I'd find that very likely. In the machine learning world the term "bias" means something more like "the set of hypotheses the learning system can represent" rather than the everyday English definition, and my question would be: do the biases of these learning processes even encompass "fractal" solutions? They probably don't, because you would encode what is even manufacturable in the first place directly into the biases of the system; otherwise the learning system is very likely to pop out an optimal solution that is not manufacturable at all. Nobody wants to manufacture a fractal-like lattice, whereas given how Nature grows things, they come up very naturally there.
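A toy illustration of that point, with invented numbers: if the representation itself can only express manufacturable strut widths, the search never even proposes an unbuildable design, so no post-hoc filtering is needed.

```python
import random

# Hypothetical sketch: encoding manufacturability into the representation
# ("bias") itself. The 2 mm minimum feature size is an assumed figure.
MIN_STRUT_MM = 2.0

def random_design_unbiased():
    # Free-form representation: strut widths can be arbitrarily thin.
    return [random.uniform(0.1, 10.0) for _ in range(8)]

def random_design_biased():
    # Biased representation: widths are drawn only from the printable range.
    return [random.uniform(MIN_STRUT_MM, 10.0) for _ in range(8)]

def manufacturable(design):
    return all(w >= MIN_STRUT_MM for w in design)

random.seed(0)
print(all(manufacturable(random_design_biased()) for _ in range(1000)))  # True
```

The unbiased generator, by contrast, routinely emits sub-2 mm struts that a fabricator would reject.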
On that note, another thing you have to watch out for with these solutions is missing an issue in your optimization. I'm looking at the first picture in the article, at the middle solution. It has a very fine mesh at the very top. I hope either that mesh isn't all that important, or that we are effectively 100% sure this part is never going to corrode or suffer other manufacturing defects (excessively large metal crystals, perhaps) that could cause such a fine tracery of metal to not work the way the model expects. (Unless there is some on the inside, the third piece doesn't seem to have that problem; it looks a lot more robust.) Of course there are places where we can indeed be confident that corrosion is not an issue, and the parts could be tested; I'm not saying this particular instance is guaranteed to be flawed, just using it as an example of the possible issues with this style of design.
> this "alien style" has a lot to do with the user's choice of algorithm and representation
Very valid point: part of why they can focus solely on optimizing those objects is that they no longer account for ease of manufacturing. And they can do so thanks to 3D printing.
It's certainly cool that our computers and manufacturing capabilities are now able to pull off such optimisation, but this has been going on for a while; compare the box-like car bodies of the 80s to the curved aerodynamics of today.
Also, I'd guess this "alien style" has a lot to do with the user's choice of algorithm and representation. With an alternative approach, e.g. building up by connecting tubes like scaffolding, or lego-like blocks, I'd imagine the results may have a very different "style".
If you mean that Wolfram advocates programmatic generation of structures, then that's true; the approach is very different though. These appear to come from a continuous optimisation process, i.e. starting with a "bad" design and iteratively tweaking it. In contrast, Wolfram tends to focus on discrete systems (e.g. cellular automata) and perform the search interactively, like a form of superoptimisation rather than numerical optimisation.
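For a concrete sense of the contrast, here is the discrete, Wolfram-style side of it: an elementary cellular automaton (rule 90) generating structure from a simple local rule, with no continuous tweaking anywhere. A self-contained toy, not drawn from any of the linked work:

```python
# Rule 90: each cell becomes the XOR of its two neighbours (zero boundary).
# Iterating a discrete rule like this grows a Sierpinski-triangle pattern,
# very different from continuously deforming a "bad" starting design.
def step_rule90(row):
    padded = [0] + row + [0]
    return [padded[i - 1] ^ padded[i + 1] for i in range(1, len(row) + 1)]

row = [0, 0, 0, 0, 1, 0, 0, 0, 0]  # a single "on" cell
for _ in range(4):
    print("".join("#" if c else "." for c in row))
    row = step_rule90(row)
```

There is no objective function being descended here; structure emerges from the rule, which is closer to Wolfram's search-over-programs framing than to numerical optimisation.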
The examples I'd cite are from the 1990s, e.g. evolved antennas ( https://en.wikipedia.org/wiki/Evolved_antenna ) and integrated circuits ( https://en.wikipedia.org/wiki/Evolvable_hardware )
Wolfram was there in 2002. Pretty cool examples though.