Infinity House
Our latest project is called Infinity House. It is a family house that can be built in different environments. It has a mechanical system to create a private space with the help of circular elements.
Architect/Designer: Olivos Design Studio
Location: USA, Colorado
Project year: 2023
Visualization: Mikhail Sizou
Software: 3ds max / fstorm render / itoosoftware
The annual ArchViz Black Friday to Cyber Monday promotion is underway, with 17 partners starting early this time around. Many more will join in the lead-up to the official Black Friday date, November 24th. Expect discounts of up to 50% on software, content, and training, running well into December.
Visit the special landing page prepared this year to showcase all the great deals you can grab immediately!
Check out the deals that have already started:
Featured front and center this year is Globe Plants, who have made big strides to support as many platforms and formats as possible. Believing in the fusion of art and technology, they have a vision to provide the most lifelike, detailed, and scientifically accurate 3D plant models, bridging the gap between virtual and real-world landscapes.
With over 15 years of professional architectural visualization and landscape design experience using GrowFX, the Globe Plants team combines those years of experience to produce 3D digital plants and trees of the highest possible quality, almost indistinguishable from real life.
Globe Plants 3D foliage models are guaranteed to accommodate the highest levels of production.
Bring your interior renderings to life with new 4D models of people performing everyday household tasks. These photorealistic human characters are designed with advanced scanning technology, making them the perfect complement to your digital scenes, animations, and visualizations. With AXYZ 4D people, you can easily incorporate a level of realism and emotion that truly enhances your projects. Whether you are a filmmaker, game developer, or designer, AXYZ 4D people offer unmatched quality and versatility to help you achieve your creative vision.
More deals open up towards Friday the 24th so keep an eye on the promo page at – https://blackfriday.ronenbekerman.com/
Ronen here from Gen-X, saying that we are currently in Generation Alpha, which is damn close to being called Gen-AI if you ask me and look at what's happening online. Crazy, fascinating, scary, thought-provoking, unethical, the next evolution. You name it, and you'll be right.
Let's dive into the rabbit hole – blue pill or red pill?
I’ll be writing this article and following up on it based on my experience as I go. Don’t expect any structure to it. Things change too fast anyway.
Sep 12, 2024: Adding a super cool experiment in which two AI speakers conduct a conversation about the article I wrote!
The idea that humanity might be living in a simulated reality gained significant attention in contemporary times due to the work of philosopher Nick Bostrom. In his 2003 paper, “Are You Living in a Computer Simulation?”, Bostrom presented the Simulation Hypothesis, which posits that we may be living in an advanced computer simulation created by a highly advanced civilization. This idea is not new, and you can trace it back to ancient philosophical concepts like Plato’s Allegory of the Cave, which can be interpreted as an early precursor to the idea of “living in a simulation” in the sense that it explores the concept of humans being trapped in an illusory or limited perception of reality.
In the context of generative AI, these philosophical inquiries become particularly relevant. As generative AI models advance, they can create increasingly realistic simulations of various aspects of our reality, such as images, sounds, and even 3D environments. These AI-generated simulations may eventually become indistinguishable from the real world, leading us to question the authenticity of our experiences and the boundaries between simulation and reality.
There’s no need to venture into the realm of the Matrix to grasp the concept at hand. The mere presence of advanced text-based AI agents, voice AI agents, and visually stunning AI avatar agents operating freely in our digital environment is sufficient to illustrate this idea. Indeed, such technological advancements are already taking place, transforming our interactions and experiences in the virtual world.
Generative AI has been a game-changer in numerous fields, but its rapid advancements have also raised several controversial topics and concerns. Some of the most prominent issues include:
These controversial topics highlight the need for ongoing research, dialogue, and collaboration among various stakeholders, including AI developers, policymakers, and users, to ensure that generative AI, or what is now widely discussed as artificial general intelligence (AGI), is developed and deployed responsibly and ethically.
I’ll address these topics in follow-up posts.
You know that feeling when you’ve got a super cool idea in your head, but you just can’t get it out for some reason? Like, maybe the process is a drag, you’re not in the right headspace, or the tools are just annoying. Well, guess what? This new AI stuff is changing the game big time! No more excuses – it’s time to unleash that creativity!
Generative AI refers to a subset of artificial intelligence that focuses on creating new content, often based on a given dataset or a set of parameters. These AI models learn from existing data and generate novel output, such as images, text, music, or 3D models. They can augment human creativity or automate specific tasks, making them an exciting tool for artists and designers.
Some of the most popular generative AI techniques include Generative Adversarial Networks (GANs), Variational Autoencoders (VAEs), and Transformers. Each of these techniques has unique characteristics that can be harnessed for architectural visualization.
GANs consist of two neural networks, a generator and a discriminator, that work together to create realistic output. The generator creates new content, while the discriminator evaluates its quality. GANs have been used to generate photorealistic images, textures, and even 3D models, making them an invaluable tool for architectural visualization artists.
VAEs are unsupervised machine learning models that learn to encode and decode data, enabling them to generate new content based on a given input. VAEs can be used to create variations of existing architectural designs, interpolate between different styles, or generate new design elements.
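The two mechanics that make VAEs tick, the reparameterization trick and the KL regularizer that keeps the latent space smooth, fit in a few lines. The sketch below also shows latent-space interpolation with a stand-in linear "decoder"; in a trained VAE the encoder and decoder are learned networks, so everything here is illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def reparameterize(mu, logvar):
    """Sample z ~ N(mu, sigma^2) as mu + sigma * eps, which keeps the
    sampling step differentiable with respect to mu and logvar."""
    eps = rng.standard_normal(mu.shape)
    return mu + np.exp(0.5 * logvar) * eps

def kl_to_standard_normal(mu, logvar):
    """KL(N(mu, sigma^2) || N(0, 1)) -- pulls latent codes toward a smooth
    prior, so interpolating between two codes stays meaningful."""
    return 0.5 * np.sum(np.exp(logvar) + mu**2 - 1.0 - logvar)

# Stand-in "decoder": a fixed linear map from a 2D latent to an 8D output.
W = rng.standard_normal((8, 2))
def decode(z):
    return W @ z

# Interpolate between two latent codes (think: two facade "styles") and
# decode each intermediate point into a new in-between design.
z_a, z_b = np.array([-1.0, 0.5]), np.array([1.5, -0.7])
blends = [decode((1 - t) * z_a + t * z_b) for t in np.linspace(0, 1, 5)]
print(kl_to_standard_normal(np.zeros(2), np.zeros(2)))  # 0.0 at the prior
```

The interpolation list is exactly the "morph between styles" use case: each intermediate latent decodes to a design that blends the two endpoints.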
Transformers are a type of deep learning model that has demonstrated remarkable capabilities in natural language processing and image generation. They can be used to generate textual descriptions of architectural concepts or create visually coherent images based on a given text input.
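At the heart of a Transformer is scaled dot-product attention: each token mixes information from every other token according to relevance weights. A minimal NumPy version of that single operation, leaving out the learned projections, multiple heads, and stacked layers that a full model adds:

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)  # subtract max for stability
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def attention(Q, K, V):
    """Scaled dot-product attention: softmax(Q K^T / sqrt(d)) V.
    Q, K, V have shape (sequence_length, d)."""
    d = Q.shape[-1]
    weights = softmax(Q @ K.T / np.sqrt(d))  # (seq, seq) relevance scores
    return weights @ V, weights

rng = np.random.default_rng(0)
tokens = rng.standard_normal((4, 8))              # 4 "tokens", 8-dim each
out, weights = attention(tokens, tokens, tokens)  # self-attention
print(out.shape, weights.sum(axis=-1))            # (4, 8), rows sum to 1
```

Each row of `weights` is a probability distribution over the sequence, which is why a text prompt can steer image generation: the model attends to the prompt tokens while producing the output.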
So far, I’ve been exploring and using DALL-E, Midjourney, and Stable Diffusion for generating images. I’ve added ChatGPT into the mix very recently as a way to get better at prompting.
"Prompting" refers to providing an initial input, or seed, to an AI model, which then guides the model to generate new content based on that input. The prompt serves as a starting point for the AI model, shaping its response or output in a specific direction, depending on the desired outcome. You can also use images as a prompt or as part of a prompt.
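In practice, a text prompt is often assembled from structured parts: subject, setting, style, camera, and renderer-specific flags. A trivial sketch of that idea; the example field values are made up, and the `--ar` flag follows Midjourney's parameter syntax, so treat the exact template as illustrative.

```python
def build_prompt(subject, setting, style, extras=()):
    """Join structured pieces into one comma-separated prompt string,
    skipping any empty fields."""
    parts = [subject, setting, style, *extras]
    return ", ".join(p for p in parts if p)

prompt = build_prompt(
    subject="modern family house with circular sliding facade elements",
    setting="pine forest hillside, golden hour",
    style="professional architectural photography, 35mm",
    extras=("--ar 16:9",),  # aspect-ratio flag in Midjourney's syntax
)
print(prompt)
```

Keeping the pieces separate like this makes it easy to swap one axis at a time (say, the lighting or the lens) and compare results, which is most of what methodical prompting amounts to.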
Prompting seems to be a rising skill, with talk of "prompt engineer" already becoming a new job title created by AI. Sure enough, you can already find such job posts published online.
And below, you can see the result after several generation frames, as I was aiming for an "epic" scene! Each new square was generated with a prompt for the specific things I wanted in that "tile".
Below are the very first images I made with Midjourney a year ago!
And below, a few from the past month!
Boasting unparalleled flexibility and the ability to run on your own hardware, Stable Diffusion offers a wealth of plugins, addons, and customization options for controlling the output, most notably through control nets.
I primarily use this tool for intentional generation, taking advantage of the numerous models available online or even training it myself. Furthermore, it paves the way for creating animations, an area I’m particularly eager to explore in the near future.
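For reference, a local text-to-image run with the Hugging Face `diffusers` library looks roughly like the sketch below. The model name and settings are just plausible defaults, not a recommendation; the imports sit inside the function so the file loads even without a GPU or the library installed.

```python
def generate(prompt, negative_prompt="blurry, distorted", steps=30, seed=42):
    """Text-to-image with Stable Diffusion via Hugging Face diffusers.
    Requires `pip install diffusers transformers torch` and a CUDA GPU;
    imports are local, so merely defining this function needs nothing."""
    import torch
    from diffusers import StableDiffusionPipeline

    pipe = StableDiffusionPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5",  # any SD checkpoint works here
        torch_dtype=torch.float16,
    ).to("cuda")
    generator = torch.Generator("cuda").manual_seed(seed)  # reproducibility
    image = pipe(
        prompt,
        negative_prompt=negative_prompt,
        num_inference_steps=steps,
        guidance_scale=7.5,  # how strongly to follow the prompt
        generator=generator,
    ).images[0]
    return image  # a PIL.Image; ControlNet variants add a conditioning image

# image = generate("modern family house in a forest, golden hour, 35mm photo")
# image.save("house.png")
```

The control-net workflows mentioned above swap in a `StableDiffusionControlNetPipeline` and pass an extra conditioning image (edges, depth, pose) alongside the text prompt.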
Or the modern-day equivalent of the California Gold Rush of the 1800s.
You can joke about it all you want, but you need the [blank] prompt to get the desired result. And I keep it blank since it is not “right,” “correct,” or “best,” to name a few options. It is undoubtedly something, but that is for you to figure out!
Much like digging for gold, the quest for success has remained the same: a combination of hard work, skill, and a bit of luck.
That was the point at which I decided to bring in ChatGPT. I had been conjuring up countless prompts on my own before that, but I decided to get more methodical about it.
Taking it for a spin in the #ChatGPT + MJ process I'm exploring. My aim is to make these look more like photos taken by a professional architecture photographer. You can indicate a known photographer for the style (not sure how ethical that is, but it works) to get very interesting results.
This is totally text-to-image, and MJ decides about the output, but there is a lot you can do in a prompt to direct it.
Midjourney is far better (in my tests) than Stable Diffusion at generating the first result.
The first (left) image is the result. The second is an old matte painting I did. The process: feed the matte painting into Midjourney's new Describe feature to get some prompts, take those to ChatGPT to craft a better version, feed that back into Midjourney along with the original image in the prompt, upscale a favorite result, and return to Photoshop for final edits. The overall "title" of the image remains the same for me.