Being able to place a project or model into the larger context of where it’s going to live changes everything. It offers the opportunity to better understand how the site interacts with its environment, with its neighborhood, and as part of the city as a whole. For this case study, we took on the task of usefully embedding an Archilogic scene into the context of a city; in this case, Washington, D.C. By embedding our 3D models in a 3D context, we were able to interact with them in a way that offered so much more ease, imagination, and usability.
For a lot of clients and users, it can be extremely difficult to imagine what it would be like for their project to actually live in the real world. This process of visualization, of genuinely seeing it in its eventual ecosystem, is important because every project becomes part of the whole it’s being added to. This tool allows us to look at the whole while still being able to zoom in, checking out the city, then the neighborhood, then the block, then the building and, ultimately, picking a floor to walk around. It’s a far more comprehensive and dynamic experience of a model.
To create this level of interaction, we first pull open-source geodata for the site, the neighborhood, and the city. We then combine it with an Archilogic scene for the project, pairing Archilogic technology with other software. We love utilizing open-source technology and combining it with Archilogic to create a better overall (and more usable) experience for our clients.
To explore the model we created for a building in Washington, D.C., you can click here. To navigate: you first see the site as a whole, then zoom in to the neighborhood, then the block, then the building itself. From there, you can select which floor you would like to explore, double-clicking to enter the floor plan.
To showcase its usage, we chose a building site located in Washington, D.C., and used Blender to export a city map of the Capitol Crossing neighborhood, a location that connects Capitol Hill and Gallery Place in the heart of the city. With GIS and web geodata, we assembled the necessary geometry in React and three.js to model the experience of selecting a building and placing it in its native environment; in this case, a city block.
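Turning geodata into three.js geometry means mapping longitude/latitude coordinates onto a flat scene plane. A minimal sketch of that step, assuming a simple equirectangular projection centred on the site (the function name and constants are illustrative, not the demo’s actual code):

```typescript
// Project WGS84 lon/lat pairs into a local planar coordinate system
// (in metres) centred on the site, suitable for extruding building
// footprints as three.js shapes. Hypothetical helper for illustration.
const EARTH_RADIUS = 6_371_000; // mean Earth radius, metres

function projectToLocal(
  lonLat: [number, number],
  origin: [number, number]
): [number, number] {
  const toRad = (d: number) => (d * Math.PI) / 180;
  // Scale longitude by cos(latitude) so east–west distances stay roughly true.
  const x =
    EARTH_RADIUS * toRad(lonLat[0] - origin[0]) * Math.cos(toRad(origin[1]));
  const y = EARTH_RADIUS * toRad(lonLat[1] - origin[1]);
  return [x, y];
}
```

Each projected footprint can then be fed into a `THREE.Shape` and extruded to the building’s height.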
Once the building is placed within the block, the user can select the building and then pick the floor they’d like to explore; that selection triggers the Archilogic scene via the Embed API. This model gives a visitor the ability to move from a macro to a micro visualization of their site, zooming from the city, to the building, to the floor, and within.
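The hand-off from the three.js building model to the embedded Archilogic viewer hinges on resolving the selected floor to a scene ID. A minimal sketch of that lookup; the `Floor` shape and the scene IDs here are illustrative placeholders, not real identifiers:

```typescript
// Hypothetical mapping from a selected floor to the Archilogic scene
// to load. The returned ID would be handed to the Embed API.
interface Floor {
  level: number;
  sceneId: string; // placeholder; real Archilogic scene IDs look different
}

function sceneForFloor(floors: Floor[], level: number): string | undefined {
  return floors.find((f) => f.level === level)?.sceneId;
}
```

Returning `undefined` for an unknown floor lets the UI fall back to the building view instead of loading an empty scene.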
For the frameworks and technical background that support the LOD demo, we first create a new project by executing: npx create-react-app threejs-demo --template redux-typescript. This generates a new React project called “threejs-demo” with Redux and TypeScript already wired in. To integrate navigation between the different pages, we create a state machine using Redux, with Ant Design providing the UI components. By choosing react-three-fiber as a wrapper, we’re able to use three.js with much greater ease.
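The navigation state machine can be sketched as a plain Redux-style reducer that steps through the zoom levels described above; the step names and action types are illustrative, not the demo’s actual code:

```typescript
// Minimal sketch of the city → block → building → floor → unit
// navigation as a Redux-style state machine.
type Step = "city" | "block" | "building" | "floor" | "unit";

type Action =
  | { type: "ZOOM_IN" }
  | { type: "ZOOM_OUT" }
  | { type: "GO_TO"; step: Step };

const ORDER: Step[] = ["city", "block", "building", "floor", "unit"];

function navigationReducer(state: Step = "city", action: Action): Step {
  const i = ORDER.indexOf(state);
  switch (action.type) {
    case "ZOOM_IN":
      // Stop at the deepest level instead of wrapping around.
      return i < ORDER.length - 1 ? ORDER[i + 1] : state;
    case "ZOOM_OUT":
      return i > 0 ? ORDER[i - 1] : state;
    case "GO_TO":
      return action.step;
    default:
      return state;
  }
}
```

Keeping the reducer pure makes each zoom transition trivially testable and lets the UI components stay a thin layer over the store.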
After creating the project, we then convert the 3D model to the React interface by exporting a .gltf file containing the model from Blender. This is processed with the command npx gltfjsx <file_name> --types, which generates TypeScript interfaces for the model (cameras, lights, meshes, and textures), all accessible in the newly generated .tsx file with no losses.
For usability and navigation, we then add drag and scroll gestures, which modify the rotation and zoom parameters set by the library. These gestures also work with the embedded Archilogic scene: once it’s loaded in the “unit” step, we can navigate through the bookmarks by scrolling.
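The gesture handling can be sketched as two pure functions: one mapping wheel deltas to a clamped zoom factor, the other mapping horizontal drag distance to rotation. The multipliers and limits here are illustrative assumptions, not the demo’s tuned values:

```typescript
// Hypothetical gesture handlers: scroll adjusts zoom (clamped to a sane
// range), drag adjusts rotation around the vertical axis.
interface CameraParams {
  zoom: number;
  rotationY: number; // radians
}

const MIN_ZOOM = 0.5;
const MAX_ZOOM = 8;

function applyScroll(params: CameraParams, deltaY: number): CameraParams {
  // Scrolling up (negative deltaY) zooms in by 10% per tick.
  const factor = deltaY < 0 ? 1.1 : 1 / 1.1;
  const zoom = Math.min(MAX_ZOOM, Math.max(MIN_ZOOM, params.zoom * factor));
  return { ...params, zoom };
}

function applyDrag(params: CameraParams, deltaX: number): CameraParams {
  // Convert horizontal drag distance (pixels) to rotation (radians).
  return { ...params, rotationY: params.rotationY + deltaX * 0.005 };
}
```

Because both functions return a new `CameraParams` object, they slot cleanly into React state updates or a Redux store.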