How does NASA use AI for Space Exploration?

In this article, let's look at how NASA uses AI for space exploration and build an ML model that can analyze images of Moon 🌚 rocks.

Data plays a huge role in almost every scientific field, today more than ever. We now have massive amounts of data, and much of it is publicly available.

Artemis:

Artemis is NASA's new lunar exploration program. It covers all of NASA's lunar activities and includes a mission to land the first woman and the next man on the Moon by 2024. The goal of the program is to begin to understand what infrastructure and preparation are needed to eventually send astronauts to set foot on Mars.

NASA's experiments will require collaboration between:

  • The experts working on Earth
  • The astronauts on the Moon
  • The machines that are programmed on Earth and executed on the Moon

Communication between Earth and the Moon is difficult, and it's not possible to "just go back for another quick look." So preparation begins years in advance to maximize the chances of successful discoveries.

The application of AI that we focus on here is how to use a computer to classify space rocks.

You may ask: why are space rocks important? Why does NASA keep sending rockets to the Moon to collect them?

Because rocks tell us about the history of our Solar System. They record geological events like the eruption of a volcano. Space rocks have been here much longer than humans, and they'll be here long after we're gone.

If we integrate AI into the process of analyzing space rocks, we can improve the collection process for both humans and rovers. We could send astronauts to the Moon with a computer that can take photos of rocks.

The computer could show the astronaut the rock type. The astronaut could determine if that type of rock is needed in the collection and decide to pick it up or leave it.

In a future mission, the computer could be placed in a rover that autonomously drives across the surface of the Moon and scans for rocks that we need for research.

OK, now let's build a machine learning model that can analyze images of Moon 🌚 rocks.

Are you excited? Let's start!

Prerequisites:

  1. Basic knowledge of machine learning.
  2. Basic knowledge of working with PyTorch.

If you're comfortable with the above prerequisites, let's code 😎

We'll be using PyTorch.
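
Before importing anything, it can help to confirm that PyTorch and torchvision are available in your notebook environment. This quick check is just a convenience and isn't part of the original walkthrough:

# Optional sanity check: confirm PyTorch and torchvision are installed
# and print whichever versions your environment provides
import torch
import torchvision

print("PyTorch version:", torch.__version__)
print("torchvision version:", torchvision.__version__)

With the environment confirmed, add the following code in a new cell to import the core libraries, and then run the cell.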

# Plotting and numerical helpers
import matplotlib.pyplot as plt
import numpy as np

# Core PyTorch modules for building and training the model
import torch
from torch import nn, optim
from torch.autograd import Variable
import torch.nn.functional as F

Next, import torchvision, which is part of the PyTorch ecosystem. You'll use this library to process the images and do manipulations like cropping and resizing. Add this code in a new cell to import the library, and then run the cell.

import torchvision
from torchvision import datasets, transforms, models

Now add code in a new cell to import the Python Imaging Library (PIL) so you can visualize the images. After you add the new code, run the cell.

from PIL import Image

Finally, add the following two notebook magic commands in a new cell so that plots are shown inline and at high resolution. After you add the new code, run the cell.

%matplotlib inline
%config InlineBackend.figure_format = 'retina'

We'll use code to accomplish these four steps to prepare our data:

  1. Get the data: Tell the computer where to get the image data.
  2. Clean the data: Crop the images to the same size.
  3. Separate the data: Separate the data by shuffling and random selection.
  4. Load random datasets: Prepare random samples for the training and testing datasets.

Now let's work through those steps, starting with getting the data.

Go to this Azure Blob storage and download the Data.zip archive.
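
The loading code below expects the extracted images in a Data folder next to your notebook. Assuming Data.zip was downloaded into the working directory and contains a top-level Data folder (an assumption about the archive layout), a small helper like this can unpack it:

# Assumes Data.zip sits in the current working directory
# and extracts to a top-level folder named Data
import zipfile
from pathlib import Path

if not Path("./Data").exists():
    with zipfile.ZipFile("Data.zip") as archive:
        archive.extractall(".")

# List the class folders that were extracted
print([p.name for p in Path("./Data").iterdir() if p.is_dir()])

Once the Data folder is in place, add the following code in a new cell to read, transform, and split the images, and then run the cell.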

# Tell the machine what folder contains the image data
data_dir = './Data'

# Read the data, crop and resize the images, split data into two groups: test and train
def load_split_train_test(data_dir, valid_size = .2):

    # Transform the images to train the model
    train_transforms = transforms.Compose([
                                       transforms.RandomResizedCrop(224),
                                       transforms.Resize(224),
                                       transforms.ToTensor(),
                                       ])

    # Transform the images to test the model
    test_transforms = transforms.Compose([transforms.RandomResizedCrop(224),
                                          transforms.Resize(224),
                                          transforms.ToTensor(),
                                      ])

    # Create two variables for the folders with the training and testing images
    train_data = datasets.ImageFolder(data_dir, transform=train_transforms)
    test_data = datasets.ImageFolder(data_dir, transform=test_transforms)

    # Get the number of images in the training folder
    num_train = len(train_data)

    # Create a list of numbers from 0 to the number of training images - 1
    # Example: For 10 images, the variable is the list [0,1,2,3,4,5,6,7,8,9]
    indices = list(range(num_train))

    # If valid_size is .2, find the index of the image that represents 20% of the data
    # If there are 10 images, a split would result in 2
    # split = int(np.floor(.2 * 10)) -> int(np.floor(2)) -> int(2) -> 2
    split = int(np.floor(valid_size * num_train))

    # Randomly shuffle the indices
    # For 10 images, an example would be that indices is now the list [2,5,4,6,7,1,3,0,9,8]
    np.random.shuffle(indices)

    from torch.utils.data.sampler import SubsetRandomSampler

    # With the indices randomly shuffled,
    # grab the first 20% of the shuffled indices and store them in the testing index list,
    # then store the remaining 80% in the training index list
    # Given our example so far, this would result in:
    # test_idx is the list [2,5]
    # train_idx is the list [4,6,7,1,3,0,9,8]
    train_idx, test_idx = indices[split:], indices[:split]

    # Create samplers to randomly grab items from the training and testing indices lists
    train_sampler = SubsetRandomSampler(train_idx)
    test_sampler = SubsetRandomSampler(test_idx)

    # Create loaders to load 16 images from the train and test data folders
    # Images are chosen based on the shuffled index lists and by using the samplers
    trainloader = torch.utils.data.DataLoader(train_data, sampler=train_sampler, batch_size=16)
    testloader = torch.utils.data.DataLoader(test_data, sampler=test_sampler, batch_size=16)

    # Return the loaders so you can grab images randomly from the training and testing data folders
    return trainloader, testloader

# Using the function that shuffles and splits the images,
# create a trainloader that draws from 80% of the images
# and a testloader that draws from the remaining 20%
trainloader, testloader = load_split_train_test(data_dir, .2)

# Print the type of rocks that are included in the trainloader
print(trainloader.dataset.classes)
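
If you'd like a quick sanity check of what the loaders return before moving on, this optional snippet (not part of the original walkthrough) pulls one batch from the training loader:

# Pull one batch from the training loader and inspect it
# Each batch should hold 16 images of 3 x 224 x 224 pixels plus 16 labels
images, labels = next(iter(trainloader))
print(images.shape)   # expected: torch.Size([16, 3, 224, 224])
print(labels[:5])     # the first few numeric class labels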

Add code to transform and select random images:

#Transform an image into pixels and resize it
test_transforms = transforms.Compose([transforms.RandomResizedCrop(224),
                                   transforms.Resize(224),
                                   transforms.ToTensor(),
                                 ])

# Randomly select a set of images by using a similar approach as the load_split_train_test function
def get_random_images(num):
    data = datasets.ImageFolder(data_dir, transform=test_transforms)
    classes = data.classes
    indices = list(range(len(data)))
    np.random.shuffle(indices)
    idx = indices[:num]
    from torch.utils.data.sampler import SubsetRandomSampler
    sampler = SubsetRandomSampler(idx)
    loader = torch.utils.data.DataLoader(data, sampler=sampler, batch_size=num)

    # Create an iterator to step through the shuffled images in the test image dataset
    dataiter = iter(loader)

    # Get and return the images and labels from the iterator
    # (use the built-in next() function; the older dataiter.next() call fails on recent PyTorch versions)
    images, labels = next(dataiter)
    return images, labels

Add code to show randomly selected images:

# Show five images - you can change this number
images, labels = get_random_images(5)

# Convert the array of pixels to an image
to_pil = transforms.ToPILImage()
fig=plt.figure(figsize=(20,20))

# Get a list of all classes in the training data
classes=trainloader.dataset.classes

# Draw the images in a plot to display in the notebook
for ii in range(len(images)):
    image = to_pil(images[ii])
    sub = fig.add_subplot(1, len(images), ii+1)
    plt.axis('off')
    plt.imshow(image)

# Display all of the images 
plt.show()

The notebook should now display a row of randomly selected Moon rock images inline: a small taste of the data that NASA scientists and engineers get to work with.
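
The walkthrough above stops at preparing and visualizing the data. To actually train a classifier on these images, one common approach is transfer learning with a pretrained network from torchvision. The sketch below is an illustrative outline, not the exact model NASA or the original module uses; the layer sizes, learning rate, and epoch count are assumptions you'd want to tune:

# A minimal transfer-learning sketch: fine-tune a pretrained ResNet on the rock images
# The layer sizes, learning rate, and epoch count are illustrative, not tuned values
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Load a ResNet pretrained on ImageNet
# (newer torchvision versions prefer weights="IMAGENET1K_V1" over pretrained=True)
model = models.resnet50(pretrained=True)

# Freeze the pretrained feature extractor so only the new head is trained
for param in model.parameters():
    param.requires_grad = False

# Replace the final layer with a small classifier sized to our rock classes
num_classes = len(trainloader.dataset.classes)
model.fc = nn.Sequential(nn.Linear(model.fc.in_features, 512),
                         nn.ReLU(),
                         nn.Dropout(0.2),
                         nn.Linear(512, num_classes),
                         nn.LogSoftmax(dim=1))
model.to(device)

criterion = nn.NLLLoss()
optimizer = optim.Adam(model.fc.parameters(), lr=0.003)

# Train for a few epochs and print the average loss per epoch
for epoch in range(3):
    running_loss = 0
    for images, labels in trainloader:
        images, labels = images.to(device), labels.to(device)
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
        running_loss += loss.item()
    print(f"Epoch {epoch + 1} - training loss: {running_loss / len(trainloader):.3f}")

Because the pretrained layers stay frozen and only the new classification head is trained, even a few epochs on a modest rock dataset can yield a reasonable first model.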

Outro:

And that's a wrap.

Yes, this article is long, but I hope it was interesting and that it helps you see how to get the data, prepare it, and build an ML model on top of it.

If this topic interests you, keep exploring, come up with new ideas, and build an ML model of your own that's impressive and useful.

Research a lot and have fun. Thank you for reading to the end 😊.

Happy Learning

-JHA
