
Introduction by Professor Michael Ostwald, Professor of Architecture at UNSW Built Environment

Good afternoon. One of the problems, if you're used to lecturing to very large groups, is you're given a microphone, and you sort of blow the back wall of the lecture theatre out. So, hopefully you can hear me, but I'm not deafening. Welcome, ladies and gentlemen, to today's Learn@Lunch Lecture Series. This is a series of lunchtime talks run by, or featuring, the University of New South Wales' top professors and researchers.

My name is Michael Ostwald. I'm a Professor of Architecture and Associate Dean of Research in the Faculty of Built Environment at the University of New South Wales. Today I'm standing in for our Dean of Built Environment, who's overseas, but it's also appropriate for me, as I work closely with our guest speaker, Hank.

Before I start, I'd like to acknowledge the Gadigal People of the Eora Nation, the traditional custodians of the land, and I'd like to pay my respects to their elders, past and present. I'd also like to make two quick notes in advance. We are recording this for a podcast, so please turn your mobile phones to silent, and hopefully when you listen to this later there won't be any ringing in the middle of it.

Today's speaker is Dr Hank Haeusler. He's Discipline Director of Computational Design in the Faculty of the Built Environment and an Associate Professor in my faculty. He's a board member of the Media Architecture Institute and a professor in the Visual Art Innovation Institute at the Central Academy of Fine Arts, Beijing. Prior to his role at UNSW, he was a Chancellor's Postdoctoral Fellow at UTS in Sydney.

Dr Haeusler is known as a researcher, educator, and entrepreneur working in media and computation in architecture and design. He's the author of nine books, many of which I recommend you look at, but my favourite has always been his book "Media Facades: History, Content, and Technology." His latest book is "Computational Design: From Promise to Practice," which I haven't read yet. He's lectured in Europe, Asia, North America, and Australia, at universities including SCI-Arc in Los Angeles, ETH in Zurich, the Royal College of Art in London, and Hong Kong University. Sorry, the University of Hong Kong, different thing.

Today in his lecture, he'll talk about architecture in the age of artificial intelligence. In particular, Hank will discuss how his work in computational design is helping to propel architecture and urban design into a future where they can access new technologies. As a brief diversion, when Hank and I were students, I'm a little bit older than him, there was a major debate raging in the built environment disciplines about, "Would artificial intelligence change our world completely, or would it just assist us in different ways?"

And as time has gone on, those two polar opposite positions have gradually begun to merge into an interesting middle ground to do with machine intelligence, and that's what Hank will be talking about today. Hank will speak for about 40 minutes, and then there's an opportunity for questions and answers. So, please join with me and welcome Associate Professor Hank Haeusler.

3:35 Learn@Lunch presentation by Associate Professor M. Hank Haeusler

Thank you, Michael. Thank you for the very warm welcome. That was Michael. That is me. That's the correct spelling of "artificial", in case you haven't noticed; it was a first test of whether you're human or machine. But thank you for coming. Can I quickly get a sea of hands: how many of you are from an architectural background? Yeah, a few. What about the rest of you? Business, building? Good. Thank you. Some engineering. Perfect.

So, I'm going to present two projects today and some theory. The topics I'm going to cover in my talk are: the Second Machine Age. What is the Second Machine Age, and how is it defined? We live at the moment in the Second Machine Age. I'm going to talk about synthetic design methods, a method I'm developing at the moment that combines machine learning with computational design. I'm going to present the Giraffe platform as a tool to operate synthetic design. And I'm going to show Urban AI as a demonstrator for urban design, how synthetic design methods could be applied, and then show a student research project at a more architectural scale, how a synthetic design method could be applied.

So, the First Machine Age, general purpose technologies, and architecture. Let me quickly explain to you what general purpose technologies are. You all use them, you all have used them, and you all will use them. General purpose technologies are technologies that have more than one particular purpose. So, for example, the combustion engine, or electricity. They're not just used for one particular thing, like a hammer; a hammer is also a technology, but a hammer is only a hammer and not really anything else. All those kinds of technologies have a profound impact on our society and our lives, in terms of how businesses are transformed and operated, and they more or less changed the whole way of living in their time.

And I quickly want to show you how First Machine Age technologies changed architecture, because a lot of people don't really see how technology changed architecture and urban design. So, I'll give you two quick examples to outline that.

That's a general purpose technology you probably all have used before: the combustion engine. Without the combustion engine, being German from Stuttgart, no cars. Without cars, you wouldn't have urban sprawl. None of us would have been able to drive into the city from Parramatta or Penrith this morning, work in here, and drive out in the evening. Urban sprawl is only feasible, only possible, because of the invention of the combustion engine, because we can now commute a longer distance.

Second example, and we all experienced it when walking in here: electricity. Without electricity, you wouldn't have two things. One is deeper floor spaces, because without electricity there's no electric light, and without that light you wouldn't have a deeper floor space, because you couldn't really bring light into the building. And the second one is this little intervention: the elevator. What would not happen without the elevator? Well, you wouldn't have been able to get to the, I don't know, 114th floor of a building. So, without the general purpose technology of electricity, you wouldn't really have an elevator. Without elevators, you wouldn't have high-rises.

So, with those two examples you can see how important general purpose technologies are for architecture and urban design, because they enable architecture and urban design. Without them, we would have lived in different cities.

So these are First Machine Age general purpose technologies.

So there are Second Machine Age general purpose technologies, described by two MIT authors in digital economics in a book called "The Second Machine Age." I can highly recommend that book. Very, very interesting book for understanding how Second Machine Age technologies could change the way we live. And if you read it with the eyes of an architect or an engineer or a builder, there are a lot of insights in there that just start making you think about what and how your profession will change.

So what are the Second Machine Age general purpose technologies? Big data, social media, and digitalised information are ones that profoundly change the way we do things. Internet of Things sensors are another. Moore's Law: Moore's Law says that every 18 months, your computer is twice as powerful as the iteration before. So if you have something with a certain intelligence now, in 18 months it's double that intelligence, and then doubled again, and you've got an exponential curve of intelligence. Exponential curves are a very, very common feature of the Second Machine Age.
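
As a rough illustration of that doubling arithmetic, here is a minimal, hypothetical Python sketch (not something from the talk); it assumes the 18-month doubling period quoted above, and the function name and numbers are illustrative only:

```python
# Minimal sketch of the exponential growth described above, assuming the
# talk's figure of a doubling every 18 months. Purely illustrative.
def relative_power(years: float, doubling_months: float = 18.0) -> float:
    """Relative computing power after `years`, doubling every `doubling_months`."""
    return 2 ** (years * 12 / doubling_months)

for y in (1.5, 3, 6, 10):
    print(f"after {y:>4} years: {relative_power(y):7.1f}x")
# after  1.5 years:     2.0x
# after    3 years:     4.0x
# after    6 years:    16.0x
# after   10 years:   101.6x
```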

Same with data. We now produce within a few days nearly double the amount of data we produced before, so the amount of data produced and digitalised will keep increasing enormously. And then, obviously, machine learning and artificial intelligence are another feature that every one of you is using constantly. Voice recognition is one such feature. Dating: if you've been Internet dating, it's all done by artificial intelligence, machine learning, and other things.

So, I'm 100% certain, I would bet my farm, and I own a farm, that the Second Machine Age cycle will also influence architecture dramatically, in the same way the First Machine Age cycle did. There is an argument that architecture has changed in the past because of general purpose technologies, and it will happen again in the future, and it will happen very, very quickly. In my lifetime the Internet has happened, and a lot of other things. And as I've watched these things over the last 10 years, in my profession I can see a very, very rapid rise, where things that were deemed impossible a few years ago are certainly possible now.

Quick question. How old is the autonomous vehicle? Five years, 10 years, 20 years? To my knowledge, and I could be wrong here, there was a DARPA challenge. DARPA is the research organisation of the American defence forces. I think 2008 was the first challenge, where they tried to drive autonomous vehicles through the Mojave Desert. It was a big disaster: cars didn't really start, didn't really know where to go, and not many cars arrived. And effectively, 10 years later, you've got autonomous vehicles driving around in cities already. So, just 10 years from not working to completely working.

So, next one. Let's go into synthetic design methods and explain what they are. A synthetic design method is, for me, a method where you combine the power of machine learning with computational design: an optimised design workflow in and for the AEC industries. We want to use machine learning because of its ability to predict data outside of existing data sets. Machine learning is not only able to take data and find patterns within an existing pool of data, the big data set; it's also able to predict data outside that set, by training on known examples.

So, I'm not really sure how familiar you are with machine learning, so I'm going to give you a quick, brief explanation through a video clip, a visual introduction to machine learning. It might be a little bit too quick, but you can look at it by yourself. The URL is on top of the video clip, so you can type the URL into your computer and watch it again later.

So, what you can see here is first an introduction of a problem, which is described here, and then the process explains how machine learning works. Pay attention to the text, read the text, read fast probably, and then you can see how machine learning is set up. You draw boundaries in machine learning, and then you have a scaffold, a matrix, where you train a model with the data you have, the known data.

And then you enable decision trees. Decision trees enable the machine learning system to make decisions, to learn. So, you find boundaries, and then you start forks, decision forks. These are essentially if-then statements. Anyone who knows computer programming: if-then statements are very, very common in computer science, where if something happens, then do this; if not, do something else.

You then put in certain criteria. You give the computer information to make decisions: false negatives, false positives, helping the computer to understand what certain data points are. And then you start the recursion of training on the data. You start creating branches, decision branches, based on that model: elevation, price per square metre, price, growing a tree, again decision branches and more decision branches. And the more branching you introduce into the machine learning, the more accurate the model or the prediction will be. Same for you: the more information you have on a certain topic, the more precisely you can predict something.

And then you train the model. You train the model on existing data. So you say, here's something I know, and based on something I know I can train the model, and then test it against something you already know has happened. And then you can go with that known set into an unknown set, and use test data that hasn't been applied or used before.

And sometimes you don't quite hit the exact numbers, and then you retrain the model again and fine-tune it until you get to a percentage that is close to 100%. So, again, the URL at the top here, R2D3, is that same visual introduction. It's a very, very interesting way of explaining machine learning to a non-computer-science audience.
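
For readers who would rather see the train-then-test idea in code than in the video, here is a minimal, hypothetical sketch in Python using scikit-learn; the tiny elevation/price data set is invented and only loosely mirrors the R2D3 example:

```python
# A minimal sketch of the train/test idea described above, using scikit-learn.
# The tiny data set (elevation, price per square metre -> city) is invented
# purely for illustration.
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

# features: [elevation in m, price per square metre]
X = [[10, 12000], [15, 11000], [5, 13000], [8, 12500],   # "New York"-like homes
     [70, 9000],  [120, 8500], [90, 9500], [150, 8000]]  # "San Francisco"-like homes
y = ["NY", "NY", "NY", "NY", "SF", "SF", "SF", "SF"]

# train on known data, hold some back to test the model
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)
model = DecisionTreeClassifier(max_depth=3).fit(X_train, y_train)

print("accuracy on held-back data:", model.score(X_test, y_test))
print("prediction for an unseen home:", model.predict([[100, 8700]]))
```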

So secondly, computational design processes. What you see here is a computational design tool called Grasshopper. For some reason, all the computational tools have animal names: there's Rhino, Grasshopper, Kangaroo, Galapagos, and so on. I'm not sure who picked those names, but they somehow stick. My students, when they design tools themselves, name them Dog and Mink and other kinds of animals, probably because of the proximity to the aquarium.

And what we've been able to do in computational design is, instead of drawing a line with your computer the way you normally would, where you click on your screen to define a point, drag your mouse over and click another point, and that defines the line, you define a line mathematically. A line is defined by two points. A point is defined by three variables: X, Y, and Z.

So, you say, "Here's a little container", these are called containers, and that container holds the information of a point. You attach a number to that point, and if that number changes in any way, the point changes. And those two points are connected to form a line. If you change the X variable of that point, you therefore change the line, and the whole thing becomes parametric, dynamic, and alterable.
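
A minimal sketch of that container idea in Python (illustrative only; in Grasshopper this is done visually with components and wires rather than code):

```python
# A point is three numbers (X, Y, Z), a line is two points, and changing one
# variable of one point changes the whole line: the model is parametric.
from dataclasses import dataclass
import math

@dataclass
class Point:
    x: float
    y: float
    z: float

@dataclass
class Line:
    a: Point
    b: Point
    def length(self) -> float:
        return math.dist((self.a.x, self.a.y, self.a.z), (self.b.x, self.b.y, self.b.z))

p1 = Point(0, 0, 0)
p2 = Point(10, 0, 0)
line = Line(p1, p2)
print(line.length())   # 10.0

p2.x = 20              # change one variable of one point...
print(line.length())   # ...and the line it defines changes: 20.0
```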

So, computational design is not really a new way of drawing architecture. It's more a new method, a new thinking process for how design could be conceived, because in the end you don't care so much about the outcome. If a point can move to any coordinate, any location, the outcome is variable; the outcome could be anything. It's the process, the designing of the computer program, that is the focus, because the design itself is something that comes second.

So what we want to do in the synthetic design method is link the predictive data generated by the machine learning. We take data, in the first example predicted by machine learning, and combine it with geometric rules. Machine learning quite often produces data; it produces new data out of existing data, but it's not really able to produce built form. And obviously, as an architect, as a design engineer, we're interested in geometry and we want to produce geometry, so we're going to link both things together.

So: computational design for optimisation, for form finding, for geometrical representation, for parametric design, versus machine learning as a field of study that uses specific models to perform tasks by finding patterns, or the development of algorithms that improve automatically through experience. That's the framework we're looking at at the moment.

So, what is the problem with machine learning and computational design, particularly for the AEC industry? Well, on the one hand there is a certainty that artificial intelligence will come. It's like a bushfire. Randy Deutsch is an author who recently wrote a book about super-users, and that's a quote he put at the beginning: it's just coming. It grows so quickly and is approaching the AEC industry, and every other industry, so rapidly that it is simply coming. So, we have to prepare ourselves.

But unfortunately, on the other hand, there is a problem: not many in the AEC industry seem to pick up digital technology. Architecture, engineering, and construction are very, very resistant. This slide, unfortunately, is in German, easy for me to read, difficult for others. It's about the construction industry: 79% of all firms are still on assembly-line principles or basic digitalisation of processes, but nothing like Industry 4.0 standards.

I'm not sure if you've heard about Industry 4.0. It's the new standard where you merge physical and cyber systems together, working with robots in factories, for example, quite often used now in car manufacturing and elsewhere. In construction it's nearly non-existent. It's really traditional: here's a brick, here's a builder, I put a brick on top of another one. So, there's huge disruption potential in the AEC industry.

Why is it a problem? On the one hand, it's very, very urgent that artificial intelligence and machine learning come into the industry. But on the other hand, nobody has really picked it up. So why is that the case? Now we can play a little bit of Mexican wave, La Ola, and you can see how we can potentially solve this. Do you know what La Ola is? I always like to play this because it's lunchtime, and some exercise is quite useful. So, if something applies to you, you put your hand up and you leave your hand up. And if it doesn't apply to you, put your hand down.

So, hands up if you're an architect or designer. Mm-hmm (affirmative). Now, leave your hand up if you can use Grasshopper. Hands go down. Leave your hand up if you can program in Python. More come down. Leave your hand up if you can program in C. Now we've got them; that normally brings everybody's hands down, trust me. Hands up if you understand the algorithmic process that enables a Google search engine. Still down. But now you can bring your hands up again: hands up if you've ever used a Google search.

So, interesting. You can all use a Google search, which runs on C programming and algorithms, but you have no idea how it actually works. So, why is that? Well, there's a system that is transforming nearly all industries, which is quite interesting, called the platform revolution, or a platform system. Google, and the Internet effectively, is a platform. On one side of the platform, you've got users like me doing a Google search. On the back end of the platform, you have some people in Silicon Valley who were able to write, in C, the algorithms that enable the Google search. And the interface brings both parties together.

So you actually don't need to understand computer programming languages and algorithms to use certain things. Some of the most commonly used platforms are Uber and Airbnb, which again bring together two different parties with two different skillsets. I can use Uber as a driver to drive people around, if that's my skillset, or I want to be driven, and I use Uber as a platform to connect me to drivers. Anyone in business here? Platform models are probably common knowledge for you. And, arguably, platforms will replace more and more traditional pipeline value chains with network value chains.

So, if a platform enables a combination of computational designers, people who can do computer programming, with non-computational designers, maybe the solution to the problem in the architecture and engineering industry is a platform. So we developed Giraffe as a platform. Giraffe is a two-sided network platform for the AEC industry that, through a front end, brings together computational designers on one side and architects and urban designers on the other, where both can work together but both stay within their domain.

So Giraffe is browser-based; we model in the browser. You can model directly in the browser through polylines, or upload your architectural model, your intent, out of your native software. But then, similar to iTunes, we've created an app store in Giraffe where computational designers can upload their apps. These are workflows, automations, that do certain tasks in the AEC industry, and you can access them.

So, similar to your iPhone or Android device, you've got a device where you, as a user, want to do certain things, and you can search through apps that give you wayfinding, a map, health and Fitbit functions, music players. You can download them and personalise your device, your iPhone, around the tasks you have in your day-to-day life. It's the same model for the AEC industry: if you're a structural engineer and you want to do certain structural things, you would download a series of apps somebody else has written. Computer programmers, computer scientists, computational designers have written them onto the platform, and then you click on them and you can operate them without really understanding how the whole thing actually works.

So you can harness the power of the crowd. With Giraffe, we don't design all the apps ourselves; the crowd designs those apps, in the same way Steve Jobs and Apple did not design all the apps on the App Store. They just said, "Here's a platform. You can upload your apps onto the platform, and you do the development." Because then somebody comes up with the idea that with 140 characters you can define a communication system that is later used as the main form of communication by the U.S. President. Twitter. Who would have thought of that?

So we leave the crowd to develop things. That might sound similar to platforms like GitHub. GitHub is a platform where computer scientists and developers can upload their computer programs for sharing. But, again, the problem is that if a computer program is written in C or Python, most of us can't really use it, so it's useless to us. So you want a platform where, on the one hand, you've got people who can write the computer programs and package them up, and, on the other hand, users who can simply use them.

So I'll give you a quick demonstration of how Giraffe works, so you understand what I really mean, through an example. That's what Giraffe looks like. There's a giant map of the world, so you've got nearly every project in the world on it. You can then display your building. You can see here, for Sydney, some of these buildings have been generated by the crowd as well, or you can upload your own context. And then you can start drawing. You draw your site here. My site is 8 Chandos Street, that's the location. So, you define the project, you specify the project, and then you start modelling on it.

So you say, "I want to model something here, and I'm going to model it so I can later evaluate and test what kind of solar impact my building has on that park." The solar impact is only one example of an app; there could be thousands of different apps as well. And you can see how relatively easy it is to model. So we've modelled something here: a first space, which you define as retail. Here you define the typology, and then you model again, something on top of that, which you can alter and transform relatively easily if you choose to model it in the browser. But you could also upload your SketchUp file, your Rhino file, you name it; it would all work.

You can change the floor heights and other things. So very, very quickly you've generated a building here, on the go. Then you go to the app store, you choose certain apps from the app store, and you can download them either for free or purchase them, depending on the willingness of the app developer to give away his or her intellectual property. And then you model a context around the site, next to the park where you're going to do the solar analysis. You define it as a landscape and give it a colour; grass is green. You can't make it completely flat, you have to give it a little dimension so the system understands it. And then, as you can see here, you just click "run." That's the only thing you have to understand in order to do a solar analysis that runs in Grasshopper.

It takes some time. The computer now requests the information from the server where the script is hosted, and depending on the complexity and on your web server, it goes quicker or slower. At the moment we still have it on a local server, so it takes longer than it should. And then it gives you a report back. So that's your solar analysis; I can now do a solar analysis for any location in the world with that script.

You might remember the recent discussion in the Sydney Morning Herald from Elizabeth Farrelly on overshadowing of the proposed Town Hall Square. You could have checked that in an instant. I just did it in half an hour one afternoon, to check whether the claim is right or not. What I would also have been able to do then is say, "Well, if you don't want to overshadow it, I change my geometry." So I move the building around, maybe make it a little bit slimmer, move it to the left or right, maybe make it higher, so it doesn't overshadow. And I think that's really the new thing about synthetic design.

So let's go into synthetic design with a different demonstrator, on urban design. With Urban AI, we have at the moment a partnership, enabled through a sponsorship from the Urban Development Institute of Australia and the City Life Lab, between UNSW, Lendlease, and Cox. We're trying to test the synthetic design method of machine learning together with computational design. So, again, to recap: AI is really the use of smart algorithms for searching for problem solutions, whereas machine learning is using smart algorithms to understand big data. Quite often, machine learning and AI get mixed up and treated as one and the same, but they're actually different.

So Urban AI, really, is applying machine learning and AI to urban design and development. And why should you care? It could help you in crafting future cities' development at scale. That's the method: we've got data, and we've got a machine learning model. Out of the machine learning model, we run a computational model as well, which takes rules and predictions. Rules such as setbacks, building heights, design intents like urban heat islands, you name it. It brings both things together, and then we visualise them in Giraffe.

So this is the context. It's actually quite interesting when you look at the context and understand the scale, because how big is that area to design, really, in comparison to where we are? This area is roughly the area from Bondi to Leichhardt and from the CBD down to Maroubra. And you want to design all of that within 20 or 30 years, and you've only got one chance to get it right. Cities don't really grow slowly anymore, as they did in the past; they grow massively, very, very quickly.

So we also argue that it's probably quite difficult to do all of that, and understand the complexity of all those things, as a human. I think it's just beyond human comprehension to design such big cities that quickly, and we have to design them that quickly to meet population growth and other needs. So I think machine learning can assist us here in giving design suggestions to humans.

So we developed a predictive model of property price and tested those values for a potential future market in and around Western Sydney. We used around three million property sale records. So whenever you bought a house, or sold a house, or did something with an apartment over the last 17 years, we probably have your data. We used that data to train the model, so that through the training we can check whether our modelling is correct. You model what has happened in a known location, and then, through that training, you can also forecast what happens in areas that are not really populated yet.

So, we've been able to predict the land value of non-existing properties. If we have the same features for a parcel of land, in terms of proximity to a train station, to schools, the number of bathrooms, and other features we list in the model, we can tell roughly how much a property in Western Sydney would be worth in areas that haven't really been subdivided yet. And while we predict the land value of non-existing properties, there's a relatively small margin of error, and it has been getting smaller and smaller. We operate at about a 10% error at the moment, but I think we're going to get it down further, definitely under 5, 4, 3%, which is relatively close for understanding what the land is worth.
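
As an illustration of the kind of model described here, a minimal, hypothetical sketch in Python with scikit-learn; the feature names and synthetic data below are invented, whereas the real Urban AI model was trained on roughly three million Sydney sales records and many more features:

```python
# Minimal sketch: train a regressor on known sales, check the error on
# held-back sales, then predict values for a hypothetical, not-yet-subdivided parcel.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_percentage_error

rng = np.random.default_rng(0)
n = 2000
# invented features: distance to train station (km), distance to school (km), bathrooms
X = np.column_stack([rng.uniform(0.1, 10, n), rng.uniform(0.1, 5, n), rng.integers(1, 4, n)])
# synthetic "sale price": closer to stations/schools and more bathrooms -> dearer
y = 900_000 - 40_000 * X[:, 0] - 25_000 * X[:, 1] + 60_000 * X[:, 2] + rng.normal(0, 30_000, n)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_train, y_train)

print(f"error on held-back sales: {mean_absolute_percentage_error(y_test, model.predict(X_test)):.1%}")
# a hypothetical parcel: 1.2 km to a future station, 0.8 km to a school, 2 bathrooms
print("predicted value:", int(model.predict([[1.2, 0.8, 2]])[0]))
```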

We don't really know yet what the land parcels are. The machine learning can give you the value of a property, but that property doesn't yet reflect a normal property in the sense of "here's a street, it's four metres wide or seven metres wide and 20 metres deep, and it has a house on it." Machine learning only gives you information through statistical models; it doesn't really give you design. So we have to add the design part, the computational part, on top of that to design something. That's really what a synthetic design method is.

So, that is our area, the area we're looking at. It's quite big: it goes from Campbelltown down here up to the Colyton and Penrith area. So it's a huge area; let's quickly take it out of context. That's our area of context, and that's what it is at the moment, the land parcels. We then add the proposed rail corridor, because it's a transport-oriented development project. And then we add stations. We add 11 stations here at the moment, but we don't really care how many stations; it's a parametric model.

So it's not drawn as something fixed. It's just information, a variable, where we say it's 11 stations. If you change the number 11 to 65, there will be 65 stations in here, and if you have only one station, it's only one station. Remember when I explained the computational design model before with the two points, how two points define a line? A point is defined by X, Y, and Z variables. If X, Y, and Z change, the direction or the location of the line changes, the length of the line changes. Same here. We can change things.
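
A minimal sketch of what "the number of stations is just a variable" means in code (hypothetical; the real model follows the proposed rail alignment rather than a straight line):

```python
# The station count is a parameter: change it and the geometry regenerates.
def place_stations(start, end, n_stations):
    """Return n_stations (x, y) points spaced evenly along a straight corridor."""
    (x0, y0), (x1, y1) = start, end
    if n_stations == 1:
        return [((x0 + x1) / 2, (y0 + y1) / 2)]
    return [(x0 + (x1 - x0) * i / (n_stations - 1),
             y0 + (y1 - y0) * i / (n_stations - 1)) for i in range(n_stations)]

corridor = ((0.0, 0.0), (44.0, 0.0))       # an invented ~44 km straight corridor
print(len(place_stations(*corridor, 11)))  # 11 stations
print(len(place_stations(*corridor, 65)))  # change the variable: 65 stations
```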

What we can also do, then, is concentrate on one location, but with enough computational power we could design the whole thing in one go. We just go in, reducing the complexity and the scale from larger to smaller, to smaller, to smaller: from a whole city to a precinct to a house. A computer is able to do it all at once. It's just a matter of computing power, which doesn't cost much.

So we zone within two kilometres around the point. That point represents the train station, and around it we start designing. We developed a script that can give us a street layout with different densities: low density, medium density, high density, and very high density. And instantly, when we change a variable, it goes from low density to high density to medium density, so we can operate all of that. We can then define another zone in there, an 800-metre radius around the train station, which is normally the travel precinct, where you walk to the station and use the station as your main mode of transport.

Again, we just draw a circle around it. We know isochrones exist, and an isochrone follows the actual way you have to walk, because the 800 metres in this example is as the crow flies. But we can also do other things; we can add more and more computational tools onto the system to suit your needs. And then you can start populating the city by again applying certain rules. We didn't include any parks at the moment, simply because we didn't have the time to do so, but you can add parks, you can put any kind of intent onto that.

And you've got houses in yellow, multi-storey in red, high-rise in blue, and then you'd have a city like that. Then, if we go further into design, a city is not really made out of envelopes; it's made of actual buildings. So we looked, again for one particular building, the green building, at how we can optimise the design of the building, designing a building in more depth. I'm showing one building, but if you give me a decent computer and a little bit of time, I'll design the whole north-south rail corridor with the constraints and the computer program we've built so far. It's just a matter of computing power.

And we use a very, very simple formula. Any developer, forgive me for simplifying your art to just a little line. But as a house owner, we face the same issue: we know how much money we have, and we know how much a piece of land will cost and how much it costs to build on that land. So if the land costs $100 and the building costs $100 as well, and I've got $180, then I've got a problem and I can't build on the land. If I've got $200, I can build. So we more or less want to start speculating and designing at that level.
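
That feasibility rule, written out as a tiny, illustrative function:

```python
# The simple rule above: you can build only if your budget covers land plus construction.
def can_build(budget: float, land_cost: float, construction_cost: float) -> bool:
    return budget >= land_cost + construction_cost

print(can_build(budget=180, land_cost=100, construction_cost=100))  # False: $20 short
print(can_build(budget=200, land_cost=100, construction_cost=100))  # True
```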

So we looked into some basic input variables a construction firm gave us and said, "Okay, there's a site selection." You then normally try to find the right mix for a residential building in terms of how much residential, how much commercial, the height, the car parking. We've got some benchmark costs for how much things cost. We took those variables, again very, very simplified, and fed them into a computer program, a script we run as well. So what you see here is an evolutionary solver.

It's an evolutionary solver. I'm just going to go back again. It takes some input variables, for example the price of the land, which is the thing that comes out of the machine learning system, some floor heights, some other cost constraints here, and runs them through a computer program that says, "Okay, I'm going to try to find you the optimum solution to the problem." So just as evolution designs trees and animal species over time, depending on the surrounding conditions, a computer can mimic time and evolve an outcome for you.
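
A minimal sketch of how an evolutionary solver works, assuming a single design variable (number of floors) and invented costs; a tool like Galapagos in Grasshopper does the same thing over many variables at once:

```python
# Breed and mutate candidate designs, keep the fittest. The "design" here is
# just a floor count; the land price stands in for the machine learning output.
import random
random.seed(0)

LAND_PRICE = 2_000_000          # e.g. the value predicted by the machine learning model
COST_PER_FLOOR = 450_000
VALUE_PER_FLOOR = 620_000
BUDGET = 8_000_000

def fitness(floors: int) -> float:
    cost = LAND_PRICE + floors * COST_PER_FLOOR
    if cost > BUDGET:
        return float("-inf")                  # infeasible design
    return floors * VALUE_PER_FLOOR - cost    # profit

population = [random.randint(1, 40) for _ in range(20)]
for generation in range(50):
    population.sort(key=fitness, reverse=True)
    parents = population[:10]                      # keep the fittest half
    children = [max(1, p + random.randint(-2, 2))  # mutate to explore nearby designs
                for p in random.choices(parents, k=10)]
    population = parents + children

best = max(population, key=fitness)
print("best number of floors:", best, "profit:", fitness(best))
```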

So we've been able, over a period of time, to find, based on the price of land, an optimum design outcome for that problem. So, a conclusion for that project: we are not there yet, and we don't claim that we can design a complete city. A city design, and any kind of architectural design, is a puzzle made of thousands and thousands of different pieces, and I think what we can claim is that we've done one or two per cent of those puzzle pieces so far. It doesn't mean that once those puzzle pieces are developed, humans have no involvement in the design anymore. There are certain things a computer cannot do. A computer cannot understand the empathic needs of a client, what a client wants to have. But, definitely, a computer can do certain of those things, automated, far better than a human could.

So, we did a little test to understand how many Grasshopper scripts, how many computational scripts, are potentially out there. There are 200 thousand users in the Grasshopper community. If you argue that every one of those 200 thousand users has 10 of those computer programs stored on their desktop somewhere in a folder, we've got two million. If we argue that half of those two million are student programs that don't really run properly, we've got one million. If we argue that of that one million, probably half are duplicates, we've got 500 thousand. And if you really want to be critical and say, "Well, there are other kinds of constraints," we bring it down to 100 thousand or 50 thousand. So there are potentially 50 thousand puzzle pieces towards a city design or architectural design that already exist out there.
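
The back-of-the-envelope estimate above, written out step by step; every line is one of the talk's assumptions, not measured data:

```python
users = 200_000                 # Grasshopper community
scripts = users * 10            # ~10 scripts per user -> 2,000,000
scripts //= 2                   # half are student scripts that don't run properly
scripts //= 2                   # half of the rest are duplicates
scripts //= 5                   # be critical: other constraints
print(scripts)                  # ~100,000 existing "puzzle pieces"
```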

So urban design won't really be done entirely by machines, but I think one would need far fewer humans to do it, because once you've created an automated puzzle piece, you never have to do it again. And that's the inherent risk. So let me quickly check the time. How are we doing for time? Still got time? Two minutes? I've been told until half past. Well, in that case, I could stop the talk here, but I think people want more. Keep going. Okay.

So, let's quickly go through another approach, and I'll show a few more things. It's an architectural project called Centaur Pod, where we tried to develop a pavilion, together with engineering, within a student research framework, to understand how machine learning, AI, and computational design could be brought together in an architectural example where students can learn things. We looked first into biological role models, to see how much we can learn from biology. We took Mimosa pudica, a plant, as a kind of role model. It is able to change the shape of its leaves by either pumping water into cells or sucking it out, so the plant starts moving.

We then translated that into a movement pattern in an origami structure, and then scripted another origami pattern that can move. We then looked into a form, and actually got a designer in to see how the whole thing could move. The idea really is that this is a space, and that space can learn human behaviour, and based on that learning it can transform and alter itself. What we're trying to do at the moment is simply understand human behaviour. We're going to test it, hopefully and most likely, at our Luminocity exhibition at UNSW in August, where we capture people's movement through infrared sensors.

So we understand how people move and what they look at in the exhibition, and through that knowledge the machine is able to forecast the behaviour of humans during the next day and the day after. Humans are very predictable. I can predict your cultural or religious background from your Opal data. Why can I do so? Well, if on certain religious holidays you're not at work, there's a certain pattern: you are, for example, always absent at the beginning of Ramadan, or at a Hindu or Buddhist festival. Not at work.

There's most likely a pattern in there that shows you belong to one religious group or another. I can also predict and forecast whether you change your job before you change your housing, or vice versa, by looking at your Monday-to-Friday behaviour. If you travel in the morning at 9:00 from Point A to Point B and go the same way back in the evening, then I can see your travel pattern, where you live and where you work. If, then, your Point B, the job, changes, you've changed your job first and not the house. If Point A, the house, changes in that pattern, then you've changed your house first, before your job. It's very easy to understand humans. We all behave in patterns, and computer scientists and machine learning look at those patterns.
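
A minimal, hypothetical sketch of that pattern reading in Python; the trip records are invented, and a real Opal-style analysis would of course be far richer:

```python
# From repeated morning origin -> destination trips, infer home (A) and work (B),
# and from which end changes first, infer whether the job or the house changed first.
from collections import Counter

def home_and_work(weekday_morning_trips):
    """Most frequent morning origin ~ home, most frequent destination ~ work."""
    origins = Counter(o for o, d in weekday_morning_trips)
    dests = Counter(d for o, d in weekday_morning_trips)
    return origins.most_common(1)[0][0], dests.most_common(1)[0][0]

before = [("Penrith", "Central")] * 20            # weeks of the same commute
after = [("Penrith", "Macquarie Park")] * 20      # destination changed, origin didn't

home_b, work_b = home_and_work(before)
home_a, work_a = home_and_work(after)
if work_a != work_b and home_a == home_b:
    print("job changed before the house")
elif home_a != home_b and work_a == work_b:
    print("house changed before the job")
```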

I'll go through this quickly, because I've got, whatever, about a minute left now. In our next course we looked into human-machine interaction. We are very, very interested in soft robotics. Soft robotics is an area of robotics that uses pneumatic systems to create movement. By simply having an origami pattern, a six-sided pattern, inside an airbag and sucking the air out, as you can see here, you can lift things up and transform them. So it becomes a muscle, and in order to make a pavilion move, you need a muscle. We don't want to rely on electric motors or other systems, because electric motors are a bit tricky to operate; they're heavy and quite bulky.

So we're looking at soft robotics and artificial muscles a lot, and we then designed artificial muscles with our students. Does it remind you of something? An aquarium octopus tentacle. Biology is a really, really interesting area to look into for design. It works more or less the way an octopus tentacle works, and the great thing is you can grab nearly anything with it. Amazon warehouses use these muscles at the moment to grab any kind of object off a shelf and put it in the cart.

We then went bigger again. Now we've got a square pattern and a hexagon pattern, to look into how those pattern changes happen. Another studio, in parallel, looked into the fabrication and the structural engineering. So that's what the pavilion will look like. There were lots of problems in between in making a structure that actually moves; engineers normally make sure that structures don't move, and now we come and say, "Well, make the structure move." That's quite interesting. So we did a lot of structural optimisation and form finding, again in Grasshopper: modelling in Grasshopper, understanding the pattern movement, and then overlaying those pattern movements onto the shape of the dome for the pavilion.

We tested and developed artificial muscles. A BIM class then also looked into the cost schedule and the production management, using HoloLenses a lot, so you can overlay information via AR. That's the further design, the little structural nodes, and then the fabrication. At the moment we're in the fabrication stage; we've got three more weeks, two more weeks of the semester, so students are frantically sweat-shopping, building those muscles. These are the muscles; we've got several hundred of them.

And the muscles are able to lift, with a 60-millilitre syringe that you just press in, around eight kilos, five centimetres up. So they're really, really powerful; I was surprised. We tried to simulate the whole muscle movement in a computer, but it was too complicated for us, so we went into prototyping: we just built a lot of those muscles and tested how they worked. And they, surprisingly, worked extremely well. We still don't really know if the structure will actually stand up. We'll test that as well.

And then we did other things. We used recycled bottle tops, shredded them, and fabricated a component with injection moulding, so we're doing some material studies as well. So, hopefully, we'll see you all at the launch of that pod on the 7th of September at UNSW at Luminocity.

So, to conclude the whole thing: what you've seen here with the students and with the Urban AI project was mainly done by students. There have been some aspects of the machine learning we've done with supervisors, with computer scientists, and with PhD students. And, surprisingly, most of the things you've seen here we didn't explicitly teach the students. We teach them the foundations of thinking computationally, design thinking, and problem-solving, and they go and find things on the Internet by themselves.

While it is good for the students to do those kinds of things, now just imagine if somebody with professional means did urban design using architecture and machine learning, somebody who already has artificial intelligence and machine learning capacity in-house and has nearly unlimited funds. There's a little company called Sidewalk Labs, which is designing at the moment a location, a precinct similar to Barangaroo South, in Toronto, where they're trying to embed all the knowledge and all the power the company has in terms of machine learning and AI into a development.

Any idea what the mother company of that operation is? Alphabet. Google. So Google is becoming an urban developer. And if Google becomes an urban developer and spends a hundred million or so, which is kitty money for Google, on an urban design project in Toronto, they don't do it because they're bored. They do it for very strategic reasons. If you look at how many buildings have to be designed for a population growing to 9 billion people, it's a huge market.

So if your current market is saturated and you want to transform the AEC industry, which does not use much artificial intelligence or digital technology, you not only have to understand the market, you have to test the market, and I think that's what Google is trying to do. So the bushfire I mentioned at the beginning, coming with AI, is very, very close for the construction industry. And I think there's a real need now to upskill, to improve your skills towards machine learning, towards AI, and towards 21st-century technologies. Thank you.

 

50:35 Q&A

 

Hank Haeusler:        Yes, please.

Speaker:                  Would the tech benefit from being produced in a 3D visualisation dome, where you could look at behavioural psychology, have people walking through proposed streets and landscapes, and then gauge their reactions? Would that be an extension of what you're doing?

Hank Haeusler:       Yeah, yeah, yeah, yeah. Visualising information. Yes, absolutely. I think what we've been keen on in visualising information is not doing it on a desktop, because not all of you have a 3D modelling program at home on your desktop with which to look at 3D models. So if you get the planning regulations from the City of Sydney, you've got a PDF, and then you have to go through that PDF and try to understand it.

                                If it's on the browser, if you've got a 3D model of a city on the browser and you can see the design intent, everybody can access models on a browser.

Speaker:                  I'm not an architect, but I'm very active in the industry; in fact my trade now is 'property specialist'. Now, it seems to me that what you've been describing today is a sophistication of the town planning profession. But the thing that disturbs me in the property field is that, particularly in the residential sector, the exteriors of the buildings are completely unattractive. It's not like the days of Federation design and so on.

                                 At what stage are we at in making the actual design of the exterior better? Either by saying, "Okay, we'll have a Federation style" or ... This is my unsophisticated way of putting it. The buildings are ghastly.

Hank Haeusler:        So, one thing. A computer is not yet completely able to make aesthetic decisions. What is ugly and what is not ugly is something very, very difficult for a computer to judge. Whether a building is ugly or not is a human perception, and we could have a long discussion about aesthetics now. What is possible for a computer now is to learn certain aesthetic styles.

                                So there's this thing called a generative adversarial network, with which you can train a computer to look at Rembrandt paintings, for example; the computer is able to understand all those Rembrandt paintings and then produce a painting that mimics the Rembrandt style. It has been done with paintings, and it has been done with music: they trained a computer program to listen to old Beatles songs, and it then composed a song by itself that sounds like a Beatles song.

                                I think it's called "Driving my Car." If you type in "AI computer Beatles song Driving my Car," you can hear the song. I'm not of the Beatles generation, so I can't really judge whether it sounds like the Beatles or not, but there it is. So there is a difference from what I've shown; I'm not showing the computer making aesthetic preferences. The aesthetic preference, or the ethical judgement of a design, still sits with the human; the machine just enables you to come to a design far quicker, at a massive scale.

Speaker:                  Hank, great session. I really enjoyed that.

Hank Haeusler:       Thank you.

Speaker:                  My question is with regard to the proprietary nature of some of the technologies that are coming out there, including Autodesk's generative design approach. How does that relate to the Grasshopper approach? And then, secondly, if it's related, what is the impact of what you're seeing on the discipline of architecture as a whole, as a profession?

Speaker:                  And just a bit of background. I've been building technology companies for the last 25 years, and my passion area is this, so intersection is happening now, which I'm happy with.

Hank Haeusler:       Good. So I think with proprietary software like Autodesk's, obviously, they're trying to control the market. What we're looking at with Giraffe is more an open-source market, where there's a platform and anyone can design a piece of it. We don't really need a software package, per se. So, again, I think the Autodesk system is a pipeline value chain, where we try to have a network value chain. So, I think there's a disruption there.

                                Second question … coming from a German background, with the car industry, everything for me is a manufacturing line somehow. If you look at the architecture profession, it's a linear value chain where you get a task from the client and then, through various steps, you produce a designed building.

                                So there could be an analogue to an assembly line for a car. At the beginning, you've got a task: design a car. At the end, you've got an outcome: a car. And along that process are certain steps that are manual and repetitive, and those steps have been automated in the car industry.

                                And I think those kinds of steps could be automated in the architecture industry as well. I spent two months in a firm in Chicago as an intern changing the ramp of a car park to optimise the number of cars that could go into the car park. You always have to do the whole package: you change the ramp, you change the floor plan, you change the section, and then look at it. And then do it again, and do it again.

                                 Now, in the same kind of time, probably in a shorter time, in two or three days, I could have written a computer program that does that automatically for you, and it's done. So my job of two months would have been gone after somebody had spent two weeks doing that. So I think the impact is so strong that you can work more efficiently, because you don't have to do the mundane things anymore.

                                 If you really think about it, in the architecture industry, how much time do you spend on mundane, dull, repetitive work? You always draw the same columns, and you always move the same columns, in every project. And that's not what you get paid for. You get paid for the design, for a design that looks like your own company's work. That's what you get paid for, not moving columns or ramps around.

                                 So, I would argue you free up time to do what you get paid for. You don't do the mundane jobs anymore; that's for the computer. And you do the things the computer can't really do: designing in a Federation style, putting an aesthetic preference into a building, understanding what aesthetics actually is. So, for me, the architecture education of the future would be more about understanding aesthetics, ethics, and the empathic needs of a client, and then leaving the documentation and the detailing to the computer.

Michael Ostwald:     I'd like to thank Hank for his lecture today. Can we all please thank Hank?

Hank Haeusler:        Thank you.

Michael Ostwald:     Thank you all for coming along today. Hank and I are clearly sharing the same script. Our next talk in the Learn@Lunch series is on the board there: "Maintaining Your Brain: The Secret to Healthy Brain Ageing."

I'd like to encourage you to come along to that. We have the programme for the remaining talks; it would have been on your seat. It's also going to be updated on our alumni website at the university, and you can also download the podcast there as well.

Michael Ostwald:     Thank you very much all for attending and thanks to Hank.

Hank Haeusler:        Thank you very much.
