CORE studio is very excited to announce a new project that we’ve been working on over the past few months.  It’s our latest stab at a dream we’ve been chasing for years now in a series of projects – Remote Solving, Platypus, Asterisk, Resthopper:


Swarm

Discover powerful design tools for your workflow,
skip the learning curve.

Swarm is an app store for the design community. It connects Designers who need tools with the Superusers who are building them.  Swarm allows design teams to discover and share solutions that empower creatives to do their best work.

In the AEC industry, the vast majority of Grasshopper definitions are produced by a minority of Rhino users – the Superusers – who typically find it difficult to share the tools they build with their colleagues.  CORE studio is developing Swarm to bridge the gap between the growing demand for bespoke parametric solutions and the limited access to the experts who can develop them.

Swarm brings the power of parametric design to a much broader audience of everyday users. On the one hand, it provides a market for computational designers to share their Grasshopper definitions as self-contained Swarm Apps that run on the cloud. On the other hand, a family of Swarm Plugins makes those Apps available within a wide range of platforms (we’re currently prototyping plugins for Revit, Illustrator, Grasshopper, and a stand-alone web client) outside of the Rhino ecosystem. 

Each Swarm App presents a user-friendly UI that hides the daunting complexities of the script’s inner workings behind a simple set of interactive widgets. Apps are available in the Swarm Marketplace, where designers from a variety of professional fields can both discover and publish solutions for their specific needs.

In addition to the marketplace and the desktop plugins, Swarm exposes an API that allows software developers to call into Swarm App endpoints from other desktop or web applications.  CORE studio is using the Swarm API as a development platform to help accelerate application prototyping and development for Thornton Tomasetti and our other clients (Konstru, Asterisk, and a number of other internal tools all consume the Swarm API in various ways), and is actively seeking opportunities to partner with teams who can utilize Swarm’s ability to iteratively deploy and maintain scalable computational design microservices.
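As an illustration, here is a minimal sketch of what calling a Swarm App endpoint from Python might look like. The URL, input names, and response shape are hypothetical placeholders for illustration, not the published Swarm API:

```python
# Hypothetical sketch of calling a Swarm App endpoint over HTTP.
# The URL, payload schema, and response keys below are placeholders;
# they are not the actual Swarm API.
import requests

SWARM_APP_URL = "https://swarm.example.com/api/apps/facade-panelizer/solve"

payload = {
    # Input values matching the widgets the Swarm App exposes (assumed schema)
    "inputs": {"panelWidth": 1.5, "panelHeight": 3.0, "rotation": 15.0},
}

response = requests.post(SWARM_APP_URL, json=payload, timeout=60)
response.raise_for_status()

# The cloud-solved Grasshopper definition's outputs come back as JSON (assumed)
for name, value in response.json().get("outputs", {}).items():
    print(f"{name}: {value}")
```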

Swarm is currently being alpha tested in CORE studio and Thornton Tomasetti, and by a handful of testers outside of TT.  CORE studio has been collaborating with computational designers in multiple design industries – architecture, engineering, apparel, footwear, automotive – and superusers in AEC firms to inform the development process, and will continue to do so in pursuit of some still-open questions through the fall (mostly about the monies … we’re serious about the payment side of this project, but not quite ready to accept credit cards from users and issue payments to app developers).

We’re going to open up access to the alpha incrementally over the next couple of months, and are actively seeking partners who are interested in testing Swarm in a corporate environment. Please contact us if you’d like to try the alpha once we open it up, or if you’d like to try Swarm in your offices.

And of course, stay tuned for more details!  We’ve been dutifully keeping this work quiet for months now, despite all of our enthusiasm about the idea.  Now that the gist is out there, we’ll do our best to flesh it out with more content over the next few weeks.


Welcome to the second CORE.AI Newsletter!

The CORE.AI Newsletter is a quarterly update on the happenings within Thornton Tomasetti’s Artificial Intelligence and Machine Learning initiative.

CORE.AI is the artificial intelligence initiative led by Dr. Badri Hiriyur, Director of Artificial Intelligence & Machine Learning. CORE studio has been working on AI / ML technologies – with a specific focus on AEC applications – for some time now. We have consolidated these efforts, incorporated an external CORE lab R&D project, and brought in additional resources to launch a new initiative – CORE.AI.

 

Need to catch up on the first newsletter? Read about it here.

Summer Data Science Interns

Summer 2019 started out on a busy and exciting note at CORE.AI! We had several new interns join the team for the summer and work on ongoing and new AI initiatives. Big shoutout to these hardworking interns for all they accomplished over the summer. Learn more about their backgrounds below.

  • Soheil is currently a PhD student at Lehigh and expects to graduate in 2020. His doctoral research focuses on using ML techniques for bridge system identification: quantifying the dynamic properties of a bridge (stiffness, mass, damping, etc.) purely from accelerometer measurements obtained from mobile phones carried by people traveling across the bridge in their cars.
  • Sinchana is currently a master’s student at Rutgers specializing in information and data science. She has previously worked at the Bangalore office of Oracle and is adept at handling Big Data and extracting business and financial intelligence from it.
  • Taj is currently studying for his M.S. degree in data science at Indiana University Bloomington. Taj has previously worked on VR/AR and 3D holographic techniques to build a virtual anatomical model of the human body for pedagogical purposes. He has a strong background in statistics and is passionate about data analytics and machine learning.
  • Min has returned for a full-time internship at CORE.AI this summer. Min recently graduated from Columbia with a B.S. degree in civil engineering (Congratulations, Min!). Min has worked on the T2D2 project since its inception and has made some important contributions.
  • Tong joins CORE.AI from Stanford, where he is studying toward an M.S. degree in civil engineering. As part of his graduation project, Tong used computer vision and machine learning techniques to detect and measure cracks in concrete structures. He has also developed computer vision techniques for structural health monitoring.

T2D2©: Thornton Tomasetti Damage Detector

Damage Mask Detection
Standard object detection networks such as Fast R-CNN, Faster R-CNN, and YOLO are able to localize and classify objects in an image, but they only output a rectangular bounding box surrounding the object or characteristic of interest (in our case, structural damage). With another model architecture called Mask R-CNN, the exact spatial region occupied by an object or characteristic can be predicted rather than just a rectangular bounding box. As an extension of T2D2©, the CORE.AI team has trained Mask R-CNN on our image database of structural damage of various types.

 

Under the hood, this network has an extra branch for pixel-wise binary classification within an estimated bounding box, which is called a mask. The benefits of using this model are clear, especially when there is a need to quantify or measure damage. However, training these models is harder because our training images needed to be re-annotated to include masks. Some examples of mask detection of damage in brick masonry structures are shown in the following images.
Images showing damage such as cracks, spalls and open mortar joints (top); Mask detection of damage (bottom)
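For readers who want to experiment with the boxes-versus-masks distinction, the sketch below runs torchvision’s COCO-pretrained Mask R-CNN on a single photo. T2D2©’s own models are trained in-house (in TensorFlow) on our damage image database, so this stand-in only illustrates the output format: each detection carries a box, a score, and a per-pixel mask.

```python
# Minimal Mask R-CNN inference sketch using torchvision's pretrained model.
# Illustrative only; not the T2D2 model or training data.
import torch
import torchvision
from torchvision.transforms.functional import to_tensor
from PIL import Image

model = torchvision.models.detection.maskrcnn_resnet50_fpn(pretrained=True)
model.eval()

image = to_tensor(Image.open("masonry_wall.jpg").convert("RGB"))

with torch.no_grad():
    pred = model([image])[0]   # dict with "boxes", "labels", "scores", "masks"

for box, score, mask in zip(pred["boxes"], pred["scores"], pred["masks"]):
    if score < 0.7:
        continue                      # skip low-confidence detections
    binary_mask = mask[0] > 0.5       # pixel-wise classification inside the box
    area_px = int(binary_mask.sum())  # mask area enables damage quantification
    print(f"box={[round(v) for v in box.tolist()]}, "
          f"score={score:.2f}, mask_area={area_px}px")
```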
T2D2© Web Service
The mobile version of T2D2© is appropriate for field inspections where image data capture and damage detection are done simultaneously. However, there are many situations where field inspections are performed with standalone hand-held or drone-mounted cameras, where real-time damage detection capability is not available at the site. Is it possible to perform damage detection and markup of such photographs offline?

 

The answer is yes! This is where the T2D2© web service comes in. The CORE.AI team has deployed all the trained damage detector models on a web server and designed REST APIs to access them remotely. The service takes raw images (individual or a set) as input and returns the detection results as the original images with overlaid boxes showing damage, and optionally as a JSON file that encodes all the detection results. If a drone video is provided as an input, the server first extracts the frames at a specified rate, performs the damage detection on them, and outputs a video recomposed from these frames with the damage overlaid. This web service is ideal for integration into third-party apps, web or mobile services. The CORE.AI team is currently building a web front end so these steps can be performed directly in the browser.
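To illustrate how a third-party application might call such a service, here is a hedged sketch using Python’s requests library. The URL, form fields, and response schema are assumptions for illustration; the actual T2D2© endpoints are internal to Thornton Tomasetti.

```python
# Hedged sketch of posting an inspection photo to a damage-detection
# web service. Endpoint, field names, and response keys are assumptions.
import requests

T2D2_URL = "https://t2d2.example.com/api/detect"   # hypothetical endpoint

with open("inspection_photo.jpg", "rb") as f:
    resp = requests.post(
        T2D2_URL,
        files={"image": f},
        data={"output": "json"},  # assumed flag: JSON instead of a marked-up image
        timeout=120,
    )
resp.raise_for_status()

for det in resp.json().get("detections", []):      # assumed response schema
    print(det["class"], det["confidence"], det["bbox"])
```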
Automated Damage Report Generation
Enabling automation is a key driver for many of CORE studio’s efforts. The CORE.AI team has developed software that generates a damage report, including a summary of all the damage found and thumbnails of individual images with annotated damage.
3D Photogrammetry + T2D2© Damage Detection
One of the objectives the CORE.AI team is working toward is an automated workflow that allows a user to upload a drone image set or video and receive an interactive 3D model. At the click of a button, this 3D model will show damage annotations directly overlaid at actual scale. 3D photogrammetry is one of the crucial components of this workflow.

 

Various photogrammetry algorithms are available today (commercial as well as freeware), many of which need extra inputs such as depth measurements or point clouds. If a video is available instead of an image set, it can be decomposed into static image frames at a user-specified frame rate, ensuring sufficient overlap between consecutive frames. The image set is then processed to extract salient features that overlap across multiple frames, build a 3D mesh of the structure, and finally overlay texture and colors onto the model.
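The video-decomposition step is straightforward to sketch with OpenCV; the file names and sampling rate below are placeholders, and the rate should be chosen high enough to keep sufficient overlap between consecutive frames.

```python
# Sample frames from a drone video at a user-specified rate with OpenCV.
import cv2

def extract_frames(video_path, out_pattern, frames_per_second=2.0):
    cap = cv2.VideoCapture(video_path)
    native_fps = cap.get(cv2.CAP_PROP_FPS) or 30.0        # fall back if FPS is unknown
    step = max(int(round(native_fps / frames_per_second)), 1)

    index = saved = 0
    while True:
        ok, frame = cap.read()
        if not ok:                                        # end of video
            break
        if index % step == 0:                             # keep every Nth frame
            cv2.imwrite(out_pattern.format(saved), frame)
            saved += 1
        index += 1
    cap.release()
    return saved

n = extract_frames("drone_survey.mp4", "frames/frame_{:05d}.jpg")
print(f"wrote {n} frames")
```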

 

One of the byproducts of this process is a 2D → 3D map, which translates 2D image coordinates to 3D model coordinates. This information is used to geo-locate damage detected on the 2D image frames onto the 3D model of the inspected structure.
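To make the idea concrete, here is a hedged sketch of that lookup for a single pixel: cast a ray from the camera pose recovered by photogrammetry through the pixel, and intersect it with the reconstructed mesh (using trimesh). The intrinsics K, rotation R, and camera center C are placeholder values standing in for the per-frame photogrammetry solution.

```python
# Hedged 2D -> 3D mapping sketch: ray-cast a damage pixel onto the
# photogrammetry mesh. K, R, C are placeholders for the per-frame
# camera parameters a photogrammetry solution would provide.
import numpy as np
import trimesh

mesh = trimesh.load("structure.obj")            # reconstructed 3D mesh

K = np.array([[2400.0, 0.0, 960.0],             # camera intrinsics (assumed)
              [0.0, 2400.0, 540.0],
              [0.0, 0.0, 1.0]])
R = np.eye(3)                                   # camera-to-world rotation (assumed)
C = np.array([0.0, -20.0, 5.0])                 # camera center in model coords (assumed)

def pixel_to_model(u, v):
    """Map a 2D pixel (e.g. a damage-box centroid) to a point on the mesh."""
    d = R @ np.linalg.inv(K) @ np.array([u, v, 1.0])   # ray direction in world space
    d /= np.linalg.norm(d)
    hits, _, _ = mesh.ray.intersects_location(ray_origins=[C], ray_directions=[d])
    if len(hits) == 0:
        return None                                    # ray missed the structure
    return hits[np.argmin(np.linalg.norm(hits - C, axis=1))]  # nearest hit

print(pixel_to_model(980, 620))
```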
In collaboration with Thornton Tomasetti’s Renewal practice, T2D2© damage detection capabilities were tested on St. Paul’s School in Garden City, New York. St. Paul’s School was listed among the Endangered Historic Places in 2010, and the Renewal practice has been tasked with the inspection and restoration of its facade.

 

Existing facade at St. Paul’s School.

 

Using T2D2 to detect damage (spalls, cracks, vegetation growth, etc.).

 

Overlaid detection points on 3D model viewed through Mirar.

 

If you’re interested in using T2D2© on a project or curious about more information, contact us!

CORE.AI + Applied Science

What is an Autoencoder?
The standard autoencoder consists of two components that work in concert. The first is the encoder, which takes a signal or image as input and represents it in a highly compact form through successive convolutional or fully connected layers of decreasing size. The second is the decoder, which takes the compact encoded signal and reconstructs the original signal, again through successive convolutional or fully connected layers, this time of increasing size.
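Below is a minimal Keras sketch of that encoder/decoder pairing for 1-D time-history signals. The layer counts and sizes are illustrative only, not the architecture used in this project.

```python
# Minimal convolutional autoencoder for 1-D signals (illustrative sizes).
import tensorflow as tf
from tensorflow.keras import layers

SIGNAL_LEN = 1024

inp = tf.keras.Input(shape=(SIGNAL_LEN, 1))
# --- encoder: successive conv layers of decreasing size ---
x = layers.Conv1D(16, 9, strides=4, padding="same", activation="relu")(inp)
x = layers.Conv1D(32, 9, strides=4, padding="same", activation="relu")(x)
x = layers.Flatten()(x)
bottleneck = layers.Dense(32, activation="relu")(x)   # compact encoded form
# --- decoder: successive layers of increasing size ---
x = layers.Dense((SIGNAL_LEN // 16) * 32, activation="relu")(bottleneck)
x = layers.Reshape((SIGNAL_LEN // 16, 32))(x)
x = layers.Conv1DTranspose(16, 9, strides=4, padding="same", activation="relu")(x)
out = layers.Conv1DTranspose(1, 9, strides=4, padding="same")(x)

autoencoder = tf.keras.Model(inp, out)
autoencoder.compile(optimizer="adam", loss="mse")     # reconstruction loss
# autoencoder.fit(signals, signals, epochs=50)        # trained to reproduce its input
```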

 

Once the autoencoder is sufficiently trained using standard back-propagation methods, such that the reconstruction loss is minimized over the entire training set, the compact encoded form, typically referred to as the bottleneck, contains valuable information about the discriminants in the signal. If the autoencoder employs convolutional layers for the encoding portion and is used on time-history signals, the encoding can be thought of as applying adaptive wavelet transforms, where the wavelet transformation is learned rather than specified a priori.
In this project, the autoencoder was used as a feature extractor to obtain a compact representation of the salient discriminants in a signal, which was subsequently used by standard classifiers to determine whether a plate was damaged. Using a dataset of about 1500 signals, our model achieved plate damage detection accuracies of over 95% on a per-signal basis and 100% on a per-plate basis using majority voting across multiple signals.
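The per-plate majority vote is simple enough to show in full; this is a generic sketch, not the project’s actual code:

```python
# Aggregate per-signal predictions (0 = undamaged, 1 = damaged) into a
# single plate-level label by majority vote.
import numpy as np

def plate_label(per_signal_predictions):
    votes = np.asarray(per_signal_predictions)
    return int(votes.sum() > len(votes) / 2)   # majority wins

print(plate_label([1, 1, 0, 1, 0]))   # -> 1: plate classified as damaged
```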

 

Welcome to the first CORE.AI Newsletter!

The CORE.AI Newsletter is a quarterly update on the happenings within Thornton Tomasetti’s Artificial Intelligence and Machine Learning initiative.

Introducing CORE.AI

CORE studio is excited to welcome Dr. Badri Hiriyur to the team as our incoming Director of Artificial Intelligence & Machine Learning, who will be leading the AI initiative.

Artificial Intelligence (AI) and Machine Learning (ML) are popular buzzwords these days because they describe a family of algorithms that have enabled various cool technological achievements such as robots, self-driving cars, Alexa, and AlphaGo. CORE studio has also been working on AI / ML technologies – with a specific focus on AEC applications – for some time now. We have consolidated these efforts, incorporated an external CORE lab R&D project, and brought in additional resources to launch a new initiative – CORE.AI.

Dr. Hiriyur comes to CORE studio from the Applied Science practice of Thornton Tomasetti, where he spent many years developing high-performance computational fluid dynamics software used by the US Navy. Dr. Hiriyur also has a strong background in machine learning, deep learning, and robotics, and is the creator of T2D2. He holds a master’s degree (2003) from Johns Hopkins and a Ph.D. (2012) from Columbia University.

 

What IS AI/ML?

Artificial Intelligence and Machine Learning are arguably the hottest buzzwords in the world right now, and most people have no idea what these words mean (p.s. robots are not taking over the world). Artificial Intelligence / Machine Learning is a family of algorithms that enable machines (computers, robots, cars, etc.) to process various kinds of information (visual, audio, data) and make intelligent decisions that help them optimally achieve predefined objectives. In contrast to traditional software programs, which also process information and enable decision-making, these algorithms learn to make their decisions by looking at prior data/decisions (supervised learning) instead of being fed explicit rules that govern their decision-making process.

For example, ML algorithms can learn to map various combinations of user-defined parameters (commonly termed features) to a set of output labels (classification) or to numerical output values (regression). ML algorithms can also uncover groupings within data (unsupervised learning) and self-improve over time based on a policy of rewards and penalties associated with their decisions (reinforcement learning).
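As a toy illustration (with made-up numbers, not a Thornton Tomasetti dataset), here is that features-to-labels and features-to-values mapping in scikit-learn:

```python
# Toy supervised learning: features in, label out (classification) or
# number out (regression), learned from prior examples rather than rules.
import numpy as np
from sklearn.ensemble import RandomForestClassifier, RandomForestRegressor

# features: [bay_spacing_m, floor_to_floor_m] (made-up example data)
X = np.array([[6.0, 3.5], [9.0, 4.0], [7.5, 3.8], [12.0, 4.5]])
y_class = ["steel", "concrete", "steel", "steel"]   # a label per example
y_reg = [42.0, 61.0, 50.0, 88.0]                    # a numeric value per example

clf = RandomForestClassifier(random_state=0).fit(X, y_class)
reg = RandomForestRegressor(random_state=0).fit(X, y_reg)

print(clf.predict([[8.0, 3.9]]))   # -> a predicted label
print(reg.predict([[8.0, 3.9]]))   # -> a predicted numeric value
```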

Deep Learning

Neural networks are a class of machine learning algorithms that use programmatic circuits connecting a series of input parameters (or input data) to output quantities through one or more layers of nodes. Each of these nodes performs a weighted combination of its inputs, transforms the result with a function (activation), and passes it on to the next node. The key to a neural network mapping inputs to reasonable outputs lies in finding the appropriate connection weights, which are learned during supervised training using a technique called loss back-propagation.

Gill, J. K. (2017, July 21). Automatic Log Analysis using Deep Learning and AI. Retrieved March 19, 2019, from https://www.xenonstack.com/blog/log-analytics-deep-machine-learning-ai/
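A single node is easy to write out in plain NumPy; this is just the “weighted combination plus activation” step described above, not any particular framework’s implementation:

```python
# One neural-network node: weighted combination of inputs, then a
# nonlinear activation (sigmoid here).
import numpy as np

def node(inputs, weights, bias):
    z = np.dot(weights, inputs) + bias   # weighted combination
    return 1.0 / (1.0 + np.exp(-z))      # sigmoid activation

x = np.array([0.5, -1.2, 3.0])           # input values
w = np.array([0.8, 0.1, -0.4])           # connection weights (learned in training)
print(node(x, w, bias=0.2))              # output passed on to the next layer
```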

Among the many types of machine learning algorithms, deep neural networks (with many layers of hidden nodes between the input and output quantities) have gained popularity in recent times because of the performance gains they have made on many classes of problems (e.g., computer vision, natural language processing) that were once considered hard for algorithms to crack. Another reason for their popularity is that they can work with raw data (e.g., pixels, audio waveforms) and automatically identify the relevant features, instead of requiring data to be pre-processed to extract important features.


CORE.AI Projects

T2D2©: Thornton Tomasetti Damage Detector

What is it?
T2D2 is an application that processes images or video feeds from structural inspections to automatically detect visible damage and defects. It includes a geo-localization module that maps detected damage to associated locations on the structure. T2D2 is deployed in two forms: T2D2 Mobile is a phone or tablet app that packages the TensorFlow inference models and enables hand-held inspections on-site. T2D2 Web provides an online inference service that processes camera image feeds and returns JSON/XML metadata describing all the corresponding damage detections; combined with the location, altitude, and orientation data obtained from drone localization, it supports drone-based inspections presented to users on a dashboard.

What’s under the hood?
T2D2 uses deep convolutional neural networks for classification and bounding-box detection, implemented in TensorFlow – a framework for deep learning developed by Google. These neural networks are trained on a vast database of annotated images collected from prior structural and facade inspections.

Development Status
The T2D2 dev team is currently working on expanding the training data sets to cover various structure and damage types, in addition to polishing up the user interfaces to enable a smooth inspection workflow. A beta release of T2D2 Mobile is expected to be rolled out to select users within Thornton Tomasetti near the end of Summer 2019 and a pilot inspection study using the T2D2 web service is expected later in the year.

Application Areas
T2D2 has broad applications across the Thornton Tomasetti practice portfolio and can be deployed to enhance inspection efficiency for various types of structures, such as buildings, bridges, tunnels, nuclear reactors, and petrochemical facilities.


Asterisk (alpha)

What is it?
Asterisk is a web application for rapid structural optioneering in early conceptual design. Simply upload a mass and core model from the Rhino client and adjust a few parameters, and Asterisk returns a concept-level structural design in a matter of seconds. Users can create iterations, explore and filter design spaces, and compare the performance metrics of select sets. Asterisk is a collaborative platform in which project teams can create, review and present design spaces across multiple mediums.

The History
CORE studio began researching Machine Learning in 2015 and published the first white paper at IASS 2015. Click the link below to read it.

Performance Measures from Architectural Massing Using Machine Learning

With Asterisk, You Can:

  • Iterate through a series of user-defined parameters, such as program, bay spacing, and material, to build up a design space. Upload masses from Rhino, iterate, and download wireframes back into a modeling workflow.
  • Explore multiple structural iterations in a filterable design space with the integrated Design Explorer interface. Set limits on results, like weight and cost, or on inputs, like bay spacing and floor-to-floor height, and get back the iterations that meet project criteria as they evolve.
  • Compare user-selected sets of iterations to better understand their relative performance. See trade-offs between options in a comparative matrix that visually highlights the top-performing metrics in each category.

What’s under the hood?
Asterisk is built from a foundation of CORE platforms, including Spectacles and Design Explorer, CORE studio’s web application suite. Asterisk achieves its speed by leveraging CORE studio’s predictive models. These models have been trained on data generated to Thornton Tomasetti design standards and pulled from the expertise of our engineers and from their workflows.

Asterisk is the first platform to bring Thornton Tomasetti’s ongoing machine learning research to implementation in AEC.


CORE studio + Applied Science

The Applied Science practice of Thornton Tomasetti has a long history of using a variety of machine learning algorithms on its projects to provide solutions to clients in the defense, life sciences, and energy sectors. CORE Studio supports these projects through the continued involvement of Dr. Badri Hiriyur in addition to providing development resources for related web and desktop applications.

 

 Want to get in touch? Curious to know more? REACH OUT!

On October 25 – 28, CORE studio hosted AEC Tech 2018, the sixth annual event, in New York City! The theme for this year’s event was Interpolations, focusing on the injection of innovative processes, technologies, and workflows that are rerouting the way the industry works, designs, and thinks.

The event kicked off with workshops on Thursday, the symposium was held Friday, and the infamous hackathon took place on Saturday and Sunday. All three events sold out: the symposium and workshops drew the largest attendance to date, with 175 and 100 people respectively, and the hackathon sold out at 80 hackers! Attendees came from all over the world, including the United States, Canada, Japan, Spain, the United Arab Emirates, Great Britain, and Australia. The event had a record-breaking 70 different AEC companies represented as well!

On Thursday, October 25, the full-day workshops were held at Baruch College and allowed attendees to build on their current skill sets, explore new tools and workflows, and find inspiration for hackathon projects. Attendees had the opportunity to take a variety of classes, including Lunchbox ML with Nathan Miller, Rhino Compute with Dale Fugier, Building Interoperability with Hiram Rodriguez, BIM Interoperability with Nick Mundell, Intro to Web APIs with Gui Talarico, Honeybee THERM with Mostapha Sadeghipour Roudsari and Chris Mackey, and Structural Optimization with Daniel Segraves and Emil Poulsen. Further details on the workshops can be read here.

The symposium took place on Friday, October 26 at Baruch College in Engelman Hall, where we welcomed speakers from a wide range of professional backgrounds to speak on their interpretation of the theme Interpolations. Speakers included: Brian Gillespie, Margaret Wang, Serena Li, Michael Leach, Andrew Heumann, Gui Talarico, Nathan Miller, James Coleman, Toru Hasegawa, and Onur Yuce Gun, and the event concluded with our first-ever roundtable discussion featuring Shane Scranton, Don Henrich, Jesse Devitte, and Mostapha Sadeghipour Roudsari. The full symposium schedule, description of talks, and speaker bios can be read here.

On Saturday, October 27, hackers, programmers, and tech folks came together at the Thornton Tomasetti office for a 26-hour hackathon. The hackathon resulted in 9 new open-source projects; all presentations and GitHub links can be found here.

The success of the 2018 events is in part due to our workshop, symposium, and hackathon sponsors: Autodesk, Konstru, wework, Ladybug, Building Ventures, Proving Ground, and Robert McNeel & Associates.

We want to thank everyone for attending the events, participating with open minds, promoting the event, and maintaining the infectious energy that keeps us all coming back every year! If you have any suggestions of what you would like to see next year, please email us and let us know!

Stay tuned, we will be posting videos from the symposium presentations in the coming weeks! Please enjoy a few symposium photos below and you can find more here.

Photos courtesy of Jon Taylor Photography

CORE studio hosted the 2017 AEC Technology Symposium and Hackathon in New York City on October 19 – 22. It was our largest AEC Tech event to date, with over 150 people attending the symposium and over 100 people attending the hackathon. Attendees came from all over the world, including New York, Philadelphia, Boston, Washington D.C., Dallas, Seattle, San Francisco, London, Switzerland, Tokyo, and Egypt. A variety of AEC companies were represented, including SHoP Architects, Perkins + Will, Tocci Construction, HOK, Affiliated Engineers, Radius Track, Dar Group, Foster and Partners, NBBJ Architecture, Ennead Architects, Permasteelisa, Skanska, Rossetti, McNeel, Autodesk, wework, and more.

The week of events started off with a full day of supplementary workshops at the Thornton Tomasetti office, which allowed attendees to build on their current skill sets and explore new tools and workflows. The workshops sold out at over 70 attendees, and all sessions were completely full. Attendees had the opportunity to take a variety of classes, including D3.js and Visualization, Dynamo and Revit, Konstru, Kangaroo Physics, Human UI, Autodesk Forge, and Speckle. Workshops were hosted by Mostapha Roudsari, Konrad Sobon, Chris Mackey, Nick Mundell, Emil Poulsen, Brian Ringley, Andrew Heumann, Jaime Rosales Duque, and Dimitrie Stefanescu.

For the first time in AEC Tech history, CORE studio introduced a theme for the symposium, “Merging and Emerging: AEC Futures”. The theme investigated opportunities in implementing cross-industry tech and discussed emerging AEC technology through three vectors: Site Futures (Robotics, Sensing, AR and Mixed Reality), Practice Futures (Cross-disciplinary collaboration, Machine Learning and AI), and Facilities Futures (Integration of sensing and data logging for the Built Environment).

We welcomed speakers from a wide variety of backgrounds such as academia, kinetic art, virtual reality, custom software, machine intelligence, engineering, and research and development. Speakers included: Jonatan Schumacher, Ben Howes, Brian Ringley, Andrew Heumann, Cecilie Brandt-Olsen, Branko Glisic, Sarah Oppenheimer, Naomi Maki, Norman Richardson, Ana Garcia Puyol, Randy Deutsch, and Friederike Schuur. The full symposium schedule, description of talks, and speaker bios can be read here.

AEC Tech 2017 was the first hackathon on Cornell Tech’s brand new NYC campus, lasting 30 hours and resulting in 16 new open-source projects, all attempting to tackle pressing issues in the AEC industry with innovative computational solutions. A conclusive list of final projects and links can be found here. (Read more about the Hackathon)

The success of the 2017 events is in part due to our workshop, symposium, and hackathon sponsors: Autodesk, Autodesk Forge, HKS Line, IrisVR, Konstru, wework, Bad Monkeys, Ladybug, and Speckle.

We want to thank everyone for attending the events, participating with open minds, promoting the event, and maintaining the infectious energy that keeps us all coming back every year! If you have any suggestions of what you would like to see next year, please email us and let us know!

Stay tuned, we will be posting videos from the symposium presentations in the coming weeks! Please enjoy a few symposium photos below and you can find more here.

 

Photos courtesy Jon Taylor and Ben Howes

CORE studio hosted the 2017 AEC Technology Symposium and Hackathon in New York City last week.  It was our largest event to date with over 150 people attending the symposium, and over 100 people attending the hackathon.  16 new projects (most of which are open source) were produced at the hackathon, all attempting to tackle pressing issues in the AEC industry with innovative computational solutions.  

The symposium and hackathon were hosted in The Bridge at Cornell Tech.  The Bridge is a brand new building on Roosevelt Island that CORE studio had the honor of working on with Thornton Tomasetti’s Structural Engineering Practice, who provided engineering services for Weiss Manfredi Architects throughout the design and construction process.  AEC Tech 2017 was the first hackathon on Cornell Tech’s brand new NYC campus.

We’ll follow up with another post focusing on the symposium (including videos of the presentations and more photographs!), but we’d like to share a few photos from the hackathon, and link out to all of the projects that were developed over the weekend.  As with previous years, the energy from the hackathon has been hard to shake this week … CORE studio is still abuzz from all of the incredible ideas and discussions, and the ferocious hacking sessions that went down over the weekend.  Our most heartfelt thanks go out to everyone who participated!

 

So, without further ado, here are the projects!

 

The hackathon winners:

  1. Duct People – https://github.com/RevitAirflowDesigner/RevitAirflowDesigner
  2. Revilations – https://github.com/mkero-tt/Revilations
  3. Airflow Network Visualizer – https://aectechhack2017.devpost.com/submissions/79989-airflownetowrk_visualizer

 

And all the rest, in alphabetical order by project name:

 

And finally, some photos — Enjoy!


Over the past few months, Thornton Tomasetti’s Sustainability practice has been updating Design Explorer as part of an ongoing collaboration with CORE studio.  The Sustainability practice frequently uses Design Explorer to analyze and present parametric energy and daylighting analysis models to clients and project stakeholders.

All of this practical use of the tech makes them perfectly suited to steer Design Explorer based on their needs, which is exactly what they’ve been up to lately.  The other driving force behind some of these changes is Colibri, another open source project we’ve been collaborating with the Sustainability team on since the 2016 AEC Technology Hackathon.

The changes listed below are now up and running over at http://tt-acm.github.io/DesignExplorer/

1. Microsoft OneDrive Support

We added support for Microsoft OneDrive – simply copy the link to a OneDrive folder. Design Explorer can now support data hosted on Google Drive, Microsoft OneDrive, or on your own private server.


2. Customize Parallel Coordinates plot scaling

By default, Design Explorer uses the minimum and maximum values in each dimension to scale the corresponding vertical axis in the parallel coordinates plot.  Based on user feedback, we added the ability to scale each vertical axis in the chart using custom values, allowing users more control over how their data is laid out.  A good example is percentages – by default, the min and max might not be 0% and 100%, which is confusing for some viewers.  Now the author can set these bounds explicitly.


Note: to load these settings with your data, save and keep the “setting.json” file alongside your “data.csv”.

3. Added iteration attributes panel

Another new feature based on user feedback: users want to know about the currently selected iteration.  Previously this data was only available by hovering over an image, which was cumbersome (and impossible in 3D mode).  The attributes panel presents metadata about the currently selected design iteration.


4. Multiple views for each iteration

Previously, only one image per design iteration was supported.  Now users can add as many images as they want for each iteration, and cycle through them in Design Explorer.  The latest release of Colibri makes setting this up super easy in Grasshopper (use the image setting component to capture existing Rhino views), but it can also be done manually in the data.csv file.


  • Add another image column to your data.csv (see the sketch below for an example layout).
  • Name it whatever makes sense to you, such as “img_plan”, “imgEastElevation”, “img_Perspective”, or “img_02”.
  • To navigate between images in Design Explorer, click the middle of the image.
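Here is a hedged sketch of such a data.csv, written out with Python’s csv module. The column names and the “in:”/“out:” header prefixes reflect our understanding of the Colibri/Design Explorer CSV convention – treat them as an illustration and verify against a Colibri-generated file.

```python
# Hypothetical data.csv with two image columns per iteration.
# Header prefixes ("in:"/"out:") are our understanding of the convention.
import csv

rows = [
    {"in:baySpacing": 6.0, "out:weight": 42.0,
     "img": "iter000.png", "img_plan": "iter000_plan.png"},
    {"in:baySpacing": 9.0, "out:weight": 61.0,
     "img": "iter001.png", "img_plan": "iter001_plan.png"},
]

with open("data.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=list(rows[0]))
    writer.writeheader()
    writer.writerows(rows)
```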


 

Note: Design Explorer will load the first image as the default thumbnail.

5. Shorter static link for sharing

Our static links were too long!  Now all static links are being shortened automatically.


6. Text as data is now fully supported

Another update that plays very nicely with Colibri.  After adding support for dropdowns and panels to the Colibri Iterator, it became very clear that text values had to be treated as first-class data points (previously, only numbers worked well in the parallel coordinates plot and with the sliders).  Text values are now fully supported, and are sorted alphabetically along the dimension axes.


7. General performance improvements

We removed some redundant code and improved general performance across the board.

Thanks again to the Sustainability crew working on this for all of the great enhancements over the past few months!  As always folks, happy Design Exploring.  Watch out for all of those extra dimensions out there.  If you have any comments or suggestions about Design Explorer or would like to get involved with the development, please do get in touch!

Written by: Benjamin Howes

CORE studio is pleased to announce the release of TT Toolbox version 1.9, our suite of free and open source plugins for Grasshopper, which is available on Food4Rhino.

Similar to our January 2017 release, the main attraction in this release is Colibri. Many new features have been added to make it easier to turn your Grasshopper definition into a Design Explorer-compatible design space. The big Colibri updates are:

  • Iterator component improved – transitioned to dynamic inputs, sliders / drop-downs / panels all supported as inputs, context menu to specify how to deal with warnings, selection broken out into separate component, ‘Fly’ button included as part of component UI – no more need for a button.
  • Selection component added – allows for more control over partial design space runs; users can specify granularity along any input vector, or can choose to run only a part of the design space (only run the first 100 iterations, for example), or both.
  • Parameter component improved – transitioned to dynamic inputs, generalized so that the component can be used to describe either design inputs or design performance data for the Aggregator to consume.
  • Image Setting component improved – multiple views supported, ability to provide custom file names added (for experts only!).
  • Aggregator component improved – context menu added to specify when / if data should be overwritten, defense added to check for Write? == true when Colibri is told to fly, inputs naming revised to distinguish between design inputs (Genome) and design performance data (Phenome), 3DObjects input added allowing for super-simple Spectacles data generation.
  • Spectacles Integration – The Spectacles exporter was extended to make it easier to work with Colibri; now all Spectacles data can be compiled by the Aggregator by funneling it through the Spectacles_Colibri component.

Also noteworthy: sometime in the past few months we crossed the 20,000 download mark on Food4Rhino!  Thanks to all of the Grasshopper users who have downloaded TT Toolbox for their interest and curiosity, and above all for their feedback and encouragement.  Keep up the good work, folks!  We’ll try to do the same ;]

Explore CORE studio’s latest contribution to DynamoRevit. We added many new nodes, some of which were already released with Dynamo 1.2; all will be available with the next Dynamo release. There are new nodes for creating and manipulating annotations, accessing element properties, and manipulating element locations, plus entirely new features like creating family types from Dynamo geometry or creating global parameters. A list of all the new nodes is below:

Annotations

  • Draw and query detail curves from Dynamo curves
  • Create dimensions by elements or manipulate dimension properties
  • Make filled regions by outline and access filled region types
  • Place revision clouds by outline curves like polygons
  • Tag any element in Revit directly through Dynamo
  • Place text notes in views, access their properties and text note types

Tag Walls and place Dimensions in Dynamo

 

Material

  • Access Material properties like Name, Shininess, Smoothness, Transparency, SurfacePatternColor, MaterialClass, MaterialCategory, CutPatternColor, Color, CutPatternId, AppearanceParameters, ThermalParameters, StructuralParameters

Selection

  • Select multiple edges from elements

Elements

  • Create rooms by location point
  • Get room boundaries as curves
  • Create reference planes by points
  • Get location point or curve and manipulate them with the set location node
  • Move elements by vector
  • Access element materials
  • Create new revisions using Dynamo
  • Create wall by face
  • Access Revit’s shape editor for roofs and floors
  • Create curtain systems by face

Create Wall by Face in Dynamo

 

Families

  • Access properties like host element of family and type
  • Create new family types from Dynamo geometry

Create new Family types from Dynamo geometry

Document

  • GetSurveyPoint and GetBasePoint give you direct access to the document’s coordinates

Parameters

  • Access parameter properties like: HasValue, IsReadOnly, IsShared, Group, ParameterType, Id, UnitType, ParameterByName, SetValue, StorageType, SharedParameterFile
  • Create shared or project parameters
  • Create global parameters (from Revit 2017)
  • Access global parameter properties

Filter

  • Create Parameter Filters by filter rules
  • Create Filter Rules by Rule Types
  • Create new Override graphic settings

Performance Adviser

  • Run Revit’s performance adviser directly in Dynamo and explore failure messages

UI Nodes

  • And several UI nodes to access elements like: Phase, Revision, FilledRegionType, RevisionNumbering, RevisionNumberType, ParameterType, BuiltInParameterGroup, RevisionVisibility, DirectShapeRoomBoundingOption, FilterType, HorizontalAlignment, VerticalAlignment, RuleType

Written by: Max Thumfart



CORE studio has released an updated version of TT Toolbox, our suite of free and open source plugins for Grasshopper on Food4Rhino.

The main attraction in this release is a new plugin called Colibri. Colibri makes it super easy to turn your Grasshopper definition into a Design Explorer-compatible design space. We wrote a dedicated blog post about it last week if you’d like to learn more.

Besides adding Colibri, version 1.8 of TT Toolbox includes:

  • Extended support for Platypus until 2018.
  • Added colored mesh support to the Spectacles exporter.
  • Minor stability enhancements to the Excel writer.
  • A few other minor bug fixes.

Head on over to Food4Rhino and get it while it’s hot! And as always if you find any bugs or have ideas for improving the tools, please do let us know on Food4Rhino, in our Grasshopper group, or on Github.

Written by: Benjamin Howes