The Houdini network of the Sandcastle project reflects the team's division of work by area of specialty. The main sections we currently have are: importing and parsing Supervise.ly data; auto-generating nodes in the network; creating HDAs (Houdini Digital Assets) for the different types of buildings and objects on the map; handling irregular pieces of architecture not suited to procedural HDAs; setting up the camera; and projecting HDAs onto their correct locations in the third dimension.
Data import sits at the top of the network. The imported data contain detailed information about the objects drawn on the map, such as the number of merlons on each fortification wall, the shape of each tower's top, and the locations and sizes of houses. The textual data, stored in JSON files, are accompanied by image cutouts of the objects from the map. We use Python to parse the data and write them into Houdini's geometry spreadsheet as attributes, including the descriptive ones, making them ready for further manipulation with the hundreds of nodes the software provides (see the parsing sketch below).

At this early stage of development, we manually modeled an approximate terrain, and a Ray node projects the data points onto it, marking the positions of the annotated objects. Branching off from the Ray node, we have built three types of objects: walls, houses, and towers. For each type, the network selects the corresponding data attributes and creates copies of the matching HDA. Each copy also takes in parameters that define its size, 2D location, and other properties, such as whether it is a side facade or a front facade. Using the z values provided by the Ray node, these HDAs can be pushed back along the z-direction and scaled accordingly, so that the resulting view from the camera position matches what we see on the map (a sketch of this scaling follows the parsing example).

For regular structures such as fortification walls, houses, and towers, procedural generation greatly simplifies the process once we find a pattern. Ruined walls, fences, and other irregularly shaped structures, however, require more customized treatment; the current version uses a PolyExtrude node to give the ruined walls their thickness.
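To make the import step concrete, here is a minimal sketch of the parser, written as a Houdini Python SOP. The file path and attribute names are placeholders, and we assume each annotation file carries an "objects" list with "classTitle" and "points"/"exterior" fields, as in Supervise.ly's JSON exports; the actual parser writes many more of the descriptive attributes.

```python
# Python SOP: parse a Supervise.ly annotation file and write one point
# per annotated object into the geometry spreadsheet.
import json
import hou

node = hou.pwd()
geo = node.geometry()

# Attributes that downstream nodes will read (names are placeholders).
geo.addAttrib(hou.attribType.Point, "obj_class", "")
geo.addAttrib(hou.attribType.Point, "width", 0.0)
geo.addAttrib(hou.attribType.Point, "height", 0.0)

path = hou.expandString("$HIP/data/map_annotations.json")  # hypothetical path
with open(path) as f:
    data = json.load(f)

for obj in data["objects"]:
    xs = [p[0] for p in obj["points"]["exterior"]]
    ys = [p[1] for p in obj["points"]["exterior"]]
    pt = geo.createPoint()
    # Place the point at the 2D bounding-box center of the annotation;
    # the missing coordinate is filled in later by the Ray node.
    pt.setPosition(hou.Vector3((min(xs) + max(xs)) * 0.5, 0.0,
                               (min(ys) + max(ys)) * 0.5))
    pt.setAttribValue("obj_class", obj["classTitle"])
    pt.setAttribValue("width", float(max(xs) - min(xs)))
    pt.setAttribValue("height", float(max(ys) - min(ys)))
```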
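The push-back-and-scale step follows from similar triangles: an object that matches the map at distance d0 from the camera must be scaled by d/d0 when moved to distance d along the same camera ray, so its apparent size is unchanged. A sketch of that arithmetic, with our own names for the quantities (the network itself performs this with standard Houdini nodes):

```python
import hou

def push_back(cam_pos, map_pos, z_hit):
    """Move a point from the map plane to depth z_hit along the camera
    ray, returning its new position and the uniform scale factor that
    keeps its apparent size unchanged. Assumes the camera looks roughly
    down the -z axis (our setup; adjust for other orientations)."""
    ray = (map_pos - cam_pos).normalized()
    d0 = (map_pos - cam_pos).length()     # distance to the map plane
    t = (z_hit - cam_pos[2]) / ray[2]     # distance to the terrain depth
    return cam_pos + ray * t, t / d0      # similar-triangles scale
```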
A separate part of the network handles the camera position and creates vectors that shoot from the camera onto the terrain. These vectors are used to fine-tune the positions of HDAs in cases where a ray is blocked by part of the terrain, or where an object is so "thick" that it cuts into the terrain. Manually adjusting the positions of the HDAs could also give insights into different ways of interpreting the layering of objects on the map.
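A minimal sketch of that fine-tuning pass, assuming a camera at /obj/cam1 (a placeholder path) and the terrain wired into the second input of a Python SOP; hou.Geometry.intersect performs the ray test:

```python
import hou

node = hou.pwd()
geo = node.geometry()                  # points carrying the HDA anchors
terrain = node.inputs()[1].geometry()  # terrain on the second input

cam = hou.node("/obj/cam1")            # assumed camera path
cam_pos = cam.worldTransform().extractTranslates()

for pt in geo.points():
    direction = (pt.position() - cam_pos).normalized()
    hit_pos, hit_nml, hit_uvw = hou.Vector3(), hou.Vector3(), hou.Vector3()
    # intersect() fills hit_pos with the first hit along the ray and
    # returns the primitive number, or -1 when the ray misses entirely.
    if terrain.intersect(cam_pos, direction, hit_pos, hit_nml, hit_uvw) >= 0:
        pt.setPosition(hit_pos)        # snap to the first terrain hit
```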
All of the operations above could be performed automatically by generating the nodes themselves with Python scripts. Once the complete network is worked out and all of its functions are implemented, our plan is to use Python to generate the nodes in the network and further automate the process of turning 2D maps into 3D scenes.
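As a taste of what that automation could look like, here is a short sketch using Houdini's Python API. The HDA type name castle_wall and the node names are placeholders; the real script would derive the branch structure from the parsed annotation data rather than hard-coding it.

```python
import hou

obj = hou.node("/obj")
geo = obj.createNode("geo", "sandcastle")

importer = geo.createNode("python", "parse_annotations")
terrain = geo.createNode("file", "terrain")   # placeholder terrain source
ray = geo.createNode("ray", "project_to_terrain")
wall_hda = geo.createNode("castle_wall")      # hypothetical HDA type name
copier = geo.createNode("copytopoints", "copy_wall_hdas")

ray.setInput(0, importer)                     # points to project
ray.setInput(1, terrain)                      # geometry to project onto
copier.setInput(0, wall_hda)                  # geometry to copy
copier.setInput(1, ray)                       # projected target points

copier.setDisplayFlag(True)
copier.setRenderFlag(True)
geo.layoutChildren()                          # tidy the network layout
```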