Welcome to the second CORE.AI Newsletter!

The CORE.AI Newsletter is a quarterly update on the happenings within Thornton Tomasetti’s Artificial Intelligence and Machine Learning initiative.

CORE.AI is the artificial intelligence initiative led by Dr. Badri Hiriyur, Director of Artificial Intelligence & Machine Learning. CORE studio has been working on AI / ML technologies – with a specific focus on AEC applications – for some time now. We have consolidated these efforts, incorporated an external CORE lab R&D project, and brought in additional resources to launch a new initiative – CORE.AI.

 

Need to catch up on the first newsletter? Read it here.

Summer Data Science Interns

Summer 2019 started on a busy and exciting note at CORE.AI! Several new interns joined the team for the summer to work on ongoing and new AI initiatives. A big shoutout to these hardworking interns for everything they accomplished over the summer. Learn more about their backgrounds below.

  • Soheil is currently a PhD student at Lehigh, expecting to graduate sometime in 2020. His doctoral research focuses on using ML techniques for bridge system identification: quantifying the dynamic properties of a bridge (stiffness, mass, damping, etc.) purely from accelerometer measurements obtained from mobile phones carried by people traveling across the bridge in their cars.
  • Sinchana is currently a master’s student at Rutgers specializing in information and data science. She has previously worked at the Bangalore office of Oracle and is adept at handling Big Data and extracting business and financial intelligence from it.
  • Taj is currently studying for his M.S. degree in data science at Indiana University Bloomington. Taj has previously worked on VR/AR and 3D holographic techniques to build a virtual anatomical model of the human body for pedagogical purposes. He has a good background in statistics and is strongly passionate about data analytics and machine learning.
  • Min has returned for a full-time internship at CORE.AI this summer. Min recently graduated from Columbia with a B.S. degree in civil engineering (congratulations, Min!). Min has worked on the T2D2 project since its inception and has made some important contributions.
  • Tong joins CORE.AI from Stanford, where he is studying toward an M.S. degree in civil engineering. As part of his graduation project, Tong used computer vision and machine learning techniques to detect and measure cracks in concrete structures. He has also developed computer vision techniques for structural health monitoring.

T2D2©: Thornton Tomasetti Damage Detector

Damage Mask Detection
Standard object detection networks such as Fast R-CNN, Faster R-CNN, and YOLO are able to localize and classify objects in an image, but they only output a rectangular bounding box surrounding the object or characteristic of interest (in our case – structural damage). Using another model architecture called Mask R-CNN, the exact spatial region occupied by an object or characteristic can be predicted rather than just a rectangular bounding box. As an extension of T2D2©, the CORE.AI team has trained Mask R-CNN on our image database of structural damage of various types.

 

Under the hood, this network has an extra branch that performs pixel-wise binary classification within an estimated bounding box; the result is called a mask. The benefits of using this model are clear, especially when there is a need to quantify or measure damage. However, training these models is harder because our training images must be re-annotated to include masks. Some examples of mask detection of damage in brick masonry structures are shown in the following images.
Images showing damage such as cracks, spalls and open mortar joints (top); Mask detection of damage (bottom)
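
For readers curious how this looks in practice, here is a minimal inference sketch using the off-the-shelf Mask R-CNN implementation in torchvision. The checkpoint path and damage class names below are hypothetical placeholders, not the actual T2D2© artifacts.

```python
# Minimal sketch: mask inference with torchvision's Mask R-CNN.
# The checkpoint path and class list are illustrative placeholders.
import torch
import torchvision
from torchvision.transforms import functional as F
from PIL import Image

CLASSES = ["background", "crack", "spall", "open_mortar_joint"]  # hypothetical labels

model = torchvision.models.detection.maskrcnn_resnet50_fpn(num_classes=len(CLASSES))
model.load_state_dict(torch.load("t2d2_mask_rcnn.pth", map_location="cpu"))  # placeholder weights
model.eval()

image = F.to_tensor(Image.open("facade.jpg").convert("RGB"))  # HxWxC [0,255] -> CxHxW [0,1]

with torch.no_grad():
    output = model([image])[0]  # dict with boxes, labels, scores and soft masks

for label, score, mask in zip(output["labels"], output["scores"], output["masks"]):
    if float(score) >= 0.5:
        binary = mask[0] > 0.5  # threshold the 1xHxW soft mask to a binary damage region
        print(f"{CLASSES[int(label)]}: score={float(score):.2f}, area={int(binary.sum())} px")
```

The per-pixel area computed from each mask is exactly what a bounding-box detector cannot provide, which is why masks matter when damage needs to be quantified.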
T2D2© Web Service
The mobile version of T2D2© is appropriate for field inspections where image data capture and damage detection are done simultaneously. However, there are many situations where field inspections are performed with standalone hand-held or drone-mounted cameras, and real-time damage detection capability is not available at the site. Is it possible to perform damage detection and markup of such photographs offline?

 

The answer is yes! This is where the T2D2© web service comes in. The CORE.AI team has deployed all the trained damage-detector models on a web server and designed REST APIs to access them remotely. The service takes raw images (individual or a set) as input and returns the original images with overlaid boxes showing damage, and optionally a JSON file that encodes all the detection results. If a drone video is provided as input, the server first extracts frames at a specified rate, performs damage detection on them, and outputs a video recomposed from the frames with damage overlaid. This web service is ideal for integration into third-party apps and web or mobile services. The CORE.AI team is currently building a web front end so these steps can be performed directly in the browser.
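
To illustrate how a third-party app might consume such a service, here is a hedged client sketch using Python's requests library. The endpoint URL, form fields, and response schema are assumptions made for this example, not the actual internal API.

```python
# Illustrative client for a damage-detection REST API.
# URL, form fields, and response schema are assumed for this sketch.
import requests

API_URL = "https://example.com/t2d2/api/detect"  # hypothetical endpoint

with open("inspection_photo.jpg", "rb") as f:
    response = requests.post(
        API_URL,
        files={"image": f},
        data={"return_json": "true"},  # request detections as JSON, not only marked-up images
        timeout=60,
    )
response.raise_for_status()

for det in response.json().get("detections", []):
    print(det["class"], det["confidence"], det["box"])
```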
Automated Damage Report Generation
Enabling automation is a key driver of many CORE studio efforts. The CORE.AI team has developed software that generates a damage report, including a summary of all the damage found and thumbnails of individual images with annotated damage.
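
As a rough sketch of what such report generation can look like, the snippet below composes a simple HTML summary from a detections file; the JSON layout and file names are hypothetical, not the format our software actually uses.

```python
# Sketch: composing a simple HTML damage report from detection results.
# The JSON structure and file names are illustrative assumptions.
import json
from collections import Counter

with open("detections.json") as f:
    results = json.load(f)

# Tally damage findings across all inspected images.
counts = Counter(det["class"] for img in results["images"] for det in img["detections"])

rows = "".join(f"<tr><td>{cls}</td><td>{n}</td></tr>" for cls, n in counts.items())
thumbs = "".join(
    f'<figure><img src="{img["annotated_path"]}" width="240">'
    f'<figcaption>{len(img["detections"])} finding(s)</figcaption></figure>'
    for img in results["images"]
)

html = (
    "<html><body><h1>Damage Summary</h1>"
    f"<table><tr><th>Damage type</th><th>Count</th></tr>{rows}</table>"
    f"<h2>Annotated Images</h2>{thumbs}</body></html>"
)

with open("damage_report.html", "w") as f:
    f.write(html)
```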
3D Photogrammetry + T2D2© Damage Detection
One of the objectives the CORE.AI team is working toward is an automated workflow that lets a user upload a drone image set or video and receive an interactive 3D model. At the click of a button, this 3D model will show damage annotations directly overlaid at actual scale. 3D photogrammetry is one of the crucial components of this workflow.

 

Various photogrammetry algorithms are available today (commercial as well as freeware), many of which need extra inputs such as depth measurements or point clouds. If a video is available instead of an image set, it can be decomposed into static image frames at a user-specified frame rate, ensuring sufficient overlap between consecutive frames. The image set is then processed to extract salient features that overlap across multiple frames; the algorithm builds a 3D mesh of the structure and subsequently overlays texture and color onto the model.
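
The frame-extraction step is simple to sketch with OpenCV; the target sampling rate and file names below are arbitrary examples, chosen to keep enough overlap between consecutive frames.

```python
# Sketch: decomposing a drone video into still frames at a user-specified
# rate, as a pre-processing step for photogrammetry. Values are examples.
import os
import cv2

os.makedirs("frames", exist_ok=True)
video = cv2.VideoCapture("drone_flight.mp4")  # placeholder input
native_fps = video.get(cv2.CAP_PROP_FPS)
target_fps = 2.0                               # frames per second to keep (example)
step = max(1, round(native_fps / target_fps))

index = saved = 0
while True:
    ok, frame = video.read()
    if not ok:
        break
    if index % step == 0:                      # keep every `step`-th frame
        cv2.imwrite(f"frames/frame_{saved:05d}.jpg", frame)
        saved += 1
    index += 1
video.release()
print(f"Extracted {saved} frames")
```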

 

One of the byproducts of this process is a 2D → 3D map that translates 2D image coordinates to 3D model coordinates. This information is valuable for geo-locating damage detected on the 2D image frames on the 3D model of the inspected structure.
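
Conceptually, this map comes from the camera pose and intrinsics the photogrammetry solver recovers for every frame. A minimal pinhole-camera sketch of the forward (3D → 2D) projection, whose inverse (ray-casting pixels onto the reconstructed mesh) is what geo-locates damage, might look like the following; all numbers are placeholders.

```python
# Sketch: pinhole-camera projection linking 3D model coordinates to 2D image
# coordinates for one frame. Intrinsics and pose values are placeholders.
import numpy as np

K = np.array([[1000.0,    0.0, 960.0],   # focal lengths and principal point
              [   0.0, 1000.0, 540.0],
              [   0.0,    0.0,   1.0]])
R = np.eye(3)                             # camera rotation recovered for this frame
t = np.array([0.0, 0.0, 5.0])             # camera translation recovered for this frame

def project(point_3d):
    """Map a 3D model point to 2D pixel coordinates in this frame."""
    cam = R @ point_3d + t                # world -> camera coordinates
    uvw = K @ cam                         # camera -> homogeneous pixel coordinates
    return uvw[:2] / uvw[2]               # perspective divide

print(project(np.array([0.5, -0.2, 10.0])))  # e.g. a mesh vertex near detected damage
```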
In collaboration with Thornton Tomasetti’s Renewal practice, T2D2© damage detection capabilities were tested at St. Paul’s School in Garden City, New York. St. Paul’s School was listed among the Endangered Historic Places in 2010, and the Renewal practice has been tasked with the inspection and restoration of the facade.

 

Existing facade at St. Paul’s School.

 

T2D2 detecting damage (spalls, cracks, vegetation growth, etc.).

 

Overlaid detection points on 3D model viewed through Mirar.

 

If you’re interested in using T2D2© on a project or curious about more information, contact us!

CORE.AI + Applied Science

What is an Autoencoder?
The standard autoencoder consists of two components that work in concert. The first is the encoder, which takes a signal or image as input and represents it in a highly compact form through successive convolutional or fully connected layers of decreasing size. The second is the decoder, which takes the compact encoded signal and reconstructs the original signal, again through successive convolutional or fully connected layers, this time of increasing size.
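
A minimal PyTorch sketch of this architecture follows; the fully connected layer widths are arbitrary examples, not the sizes used in the project.

```python
# Minimal autoencoder sketch in PyTorch. Layer sizes are arbitrary examples.
import torch
from torch import nn

class Autoencoder(nn.Module):
    def __init__(self, input_dim=1024, bottleneck_dim=32):
        super().__init__()
        self.encoder = nn.Sequential(     # successive layers of decreasing size
            nn.Linear(input_dim, 256), nn.ReLU(),
            nn.Linear(256, bottleneck_dim),
        )
        self.decoder = nn.Sequential(     # successive layers of increasing size
            nn.Linear(bottleneck_dim, 256), nn.ReLU(),
            nn.Linear(256, input_dim),
        )

    def forward(self, x):
        z = self.encoder(x)               # compact "bottleneck" representation
        return self.decoder(z)            # reconstruction of the input

model = Autoencoder()
x = torch.randn(8, 1024)                  # a batch of example signals
loss = nn.functional.mse_loss(model(x), x)  # reconstruction loss to back-propagate
```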

 

Once the autoencoder is sufficiently trained using standard back-propagation methods, such that the reconstruction loss is minimized over the entire training set, the compact encoded form, typically referred to as the bottleneck, contains valuable information about the discriminants in the signal. If the autoencoder employs convolutional layers for the encoding portion and is used on time-history signals, the encoding can be thought of as applying adaptive wavelet transforms – where the wavelet transformation is learned rather than specified a priori.
In this project, the autoencoder was used as a feature extractor to obtain a compact representation of the salient discriminants in a signal, which standard classifiers then used to determine whether a plate was damaged. Using a dataset of about 1,500 signals, our model achieved plate-damage detection accuracies of over 95% on a per-signal basis and 100% on a per-plate basis, based on majority voting over multiple signals.
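
A hedged sketch of that pipeline, with random placeholder arrays standing in for the real measurements and a small stand-in encoder in place of the trained one, might look like this:

```python
# Sketch: encoder-as-feature-extractor followed by a standard classifier and
# per-plate majority voting. Random arrays stand in for the real signals.
import numpy as np
import torch
from torch import nn
from sklearn.linear_model import LogisticRegression

signals = np.random.randn(1500, 1024).astype("float32")  # placeholder measurements
labels = np.random.randint(0, 2, size=1500)              # placeholder damaged/undamaged labels

# Stand-in for the trained encoder half of the autoencoder (see sketch above).
encoder = nn.Sequential(nn.Linear(1024, 256), nn.ReLU(), nn.Linear(256, 32))

def encode(batch):
    """Compress signals to their bottleneck features."""
    with torch.no_grad():
        return encoder(torch.as_tensor(batch)).numpy()

clf = LogisticRegression(max_iter=1000).fit(encode(signals), labels)  # per-signal classifier

def plate_is_damaged(plate_signals):
    """Majority vote across all signals measured on one plate."""
    return clf.predict(encode(plate_signals)).mean() > 0.5

print(plate_is_damaged(signals[:10]))  # vote over ten example signals from one plate
```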