A.R.A.I.A.

A.R.A.I.A. (Augmented Reality + Artificial Intelligence for Immersive Artwork) is an experimental app designed to bring static physical artwork to life through the use of Augmented Reality (AR) and AI-generated content. The project explores how real-time LiDAR scanning, camera tracking, and AI-driven visual and interactive elements can transform the experience of engaging with visual art.

Position Role

Lead Developer, AR/AI Integrator

Software Used

RealityKit, SwiftUI, ARKit (LiDAR scanning and camera tracking), Core ML, OpenAI API, DALL·E, Runway AI

PROJECT INFO

Year
2025

Project Overview

The goal of A.R.A.I.A. was to prototype an app that combines AR and AI technologies to create an immersive layer of interactivity on top of physical artworks. The core concept was simple but ambitious: allow a viewer to point their device at an artwork and watch it “come alive” through dynamic visual overlays, movement, and AI-generated content that responds to both the artwork and user interactions.

The app was built with RealityKit and SwiftUI, using LiDAR scanning and camera tracking to detect artwork in physical space and anchor AR content to it in real time. To generate dynamic visuals, I integrated the OpenAI API and DALL·E for custom image creation, and Runway AI for generative video and animation effects.

One of the key creative challenges was designing interactions that felt artistically meaningful rather than gimmicky. The goal was not simply to add flashy effects, but to deepen the viewer’s connection with the original artwork through complementary AI-driven layers that react to visual cues or narrative prompts.
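To make the anchoring step concrete, here is a minimal sketch of how the detection layer could be set up with ARKit and RealityKit. The AR Resource Group “Artworks” and the reference-image name “painting-01” are hypothetical placeholders, and coaching UI, error handling, and occlusion tuning are omitted.

import SwiftUI
import RealityKit
import ARKit

// A minimal sketch of artwork detection and anchoring, assuming an AR Resource
// Group named "Artworks" in the asset catalog that holds a reference photo of
// each physical piece (names like "painting-01" are hypothetical).
struct ArtworkARView: UIViewRepresentable {
    func makeUIView(context: Context) -> ARView {
        let arView = ARView(frame: .zero)

        // World tracking plus scene reconstruction (LiDAR devices only) so that
        // overlays sit correctly within the physical space around the artwork.
        let config = ARWorldTrackingConfiguration()
        if ARWorldTrackingConfiguration.supportsSceneReconstruction(.mesh) {
            config.sceneReconstruction = .mesh
        }
        if let referenceImages = ARReferenceImage.referenceImages(
            inGroupNamed: "Artworks", bundle: nil) {
            config.detectionImages = referenceImages
            config.maximumNumberOfTrackedImages = 1
        }
        arView.session.run(config)

        // Anchor a placeholder plane to the detected artwork; the detected image
        // lies in the anchor's X-Z plane, so the overlay uses width/depth.
        let anchor = AnchorEntity(.image(group: "Artworks", name: "painting-01"))
        let overlay = ModelEntity(
            mesh: .generatePlane(width: 0.4, depth: 0.6), // sized roughly to the piece
            materials: [SimpleMaterial(color: .white, isMetallic: false)] // placeholder
        )
        anchor.addChild(overlay)
        arView.scene.addAnchor(anchor)

        return arView
    }

    func updateUIView(_ uiView: ARView, context: Context) {}
}

The view drops into the SwiftUI hierarchy like any other UIViewRepresentable; the placeholder material on the plane is swapped out later for generated imagery or video (see the compositing sketch under Workflow).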

Role & Responsibilities

I served as the concept creator, lead developer, and technical integrator for the A.R.A.I.A. project. I was responsible for defining the creative vision, building the AR app prototype in SwiftUI with RealityKit, integrating the AI services (the OpenAI API with DALL·E, and Runway AI), and designing the real-time camera tracking and interaction logic using LiDAR and ARKit.
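As an illustration of the integration work, the sketch below shows one way the image-generation call could look, assuming the OpenAI Images API is reached over a plain URLSession. The model name, prompt handling, and API key handling are simplified placeholders rather than the production setup.

import UIKit

// A hedged sketch of requesting a generated overlay image from the OpenAI
// Images API. Networking details (retries, rate limits, key storage) are
// omitted for brevity.
func generateOverlayImage(prompt: String, apiKey: String) async throws -> UIImage {
    var request = URLRequest(url: URL(string: "https://api.openai.com/v1/images/generations")!)
    request.httpMethod = "POST"
    request.setValue("Bearer \(apiKey)", forHTTPHeaderField: "Authorization")
    request.setValue("application/json", forHTTPHeaderField: "Content-Type")
    request.httpBody = try JSONSerialization.data(withJSONObject: [
        "model": "dall-e-3",           // assumed model; the prototype may differ
        "prompt": prompt,
        "n": 1,
        "size": "1024x1024",
        "response_format": "b64_json"  // return the image inline as base64
    ] as [String: Any])

    let (data, _) = try await URLSession.shared.data(for: request)

    // Decode the base64-encoded image from the first item in the response.
    guard let json = try JSONSerialization.jsonObject(with: data) as? [String: Any],
          let first = (json["data"] as? [[String: Any]])?.first,
          let b64 = first["b64_json"] as? String,
          let imageData = Data(base64Encoded: b64),
          let image = UIImage(data: imageData) else {
        throw URLError(.cannotDecodeContentData)
    }
    return image
}

Runway-generated clips were handled through its own service; how both kinds of output end up on the anchored plane is sketched under Workflow.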

Workflow

The project began with conceptual development, where I mapped out use cases for AI-enhanced AR interactions with static artwork. I then built the app prototype using SwiftUI and RealityKit, focusing first on robust camera tracking and LiDAR-based spatial mapping. I integrated the OpenAI API with DALL·E to dynamically generate image overlays, and used Runway AI to produce video and animation content that could be composited into the AR space. The final phase involved testing the app with real-world artworks and refining the interaction models to ensure that the augmented elements meaningfully enhanced the original pieces.
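A minimal compositing sketch follows, assuming overlay is the ModelEntity anchored to the artwork in the earlier detection sketch, that a DALL·E result is already available as a UIImage, and that a Runway clip has been downloaded to a local file URL.

import AVFoundation
import RealityKit
import UIKit

// Still image overlays: convert the generated image into a texture and apply
// it to the plane that is anchored to the physical artwork.
@MainActor
func applyImage(_ image: UIImage, to overlay: ModelEntity) throws {
    guard let cgImage = image.cgImage else { return }
    let texture = try TextureResource.generate(from: cgImage,
                                               options: .init(semantic: .color))
    var material = UnlitMaterial()
    material.color = .init(tint: .white, texture: .init(texture))
    overlay.model?.materials = [material]
}

// Animated overlays: play a generated clip (saved to a local URL) directly on
// the anchored plane via a VideoMaterial.
@MainActor
func applyVideo(at url: URL, to overlay: ModelEntity) {
    let player = AVPlayer(url: url)
    overlay.model?.materials = [VideoMaterial(avPlayer: player)]
    player.play()
}

In this sketch a still result becomes an unlit texture on the anchored plane, while a clip plays on the same plane through a VideoMaterial; either way the overlay stays registered to the physical artwork as the viewer moves.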
