Ever wondered how websites display stunning 3D graphics right in your browser? That’s where Three.js comes in. Three.js is a powerful JavaScript library that makes creating complex 3D graphics accessible to web developers without requiring advanced graphics programming knowledge.

Born as an open-source project in 2010, Three.js has revolutionized web development by bridging the gap between sophisticated 3D rendering and standard web technologies. It provides a simplified layer over WebGL, handling the complex mathematical calculations and GPU interactions that would otherwise require extensive expertise.

Whether you’re building interactive product visualizers, immersive web experiences, data visualizations, or browser-based games, Three.js offers the tools to bring your creative vision to life. Its growing popularity stems from its balance of power and accessibility, making 3D web graphics more mainstream than ever before.

This guide will walk you through everything you need to know about Three.js – from basic setup to advanced techniques – empowering you to create compelling 3D experiences for the modern web.

What Is Three.js?

Three.js is a JavaScript library used for creating and displaying 3D graphics in a web browser using WebGL. It simplifies the process of rendering complex 3D scenes by providing tools for handling geometries, materials, lighting, cameras, and animations. It’s widely used for interactive visualizations, games, and web-based 3D applications.

Getting Started with Three.js

Setting Up Your First Three.js Project


Getting started with Three.js is surprisingly straightforward. The library requires minimal setup for your first project. You’ll need a basic understanding of JavaScript, HTML5, and CSS3 to begin.

First, choose how to include Three.js in your project. There are two main approaches:

<!-- CDN option - quick for prototyping -->
<script src="https://cdn.jsdelivr.net/npm/three@0.132.2/build/three.min.js"></script>

<!-- Or using npm (run this in your terminal) -->
npm install three

Many developers in the Three.js community, including those on GitHub and Stack Overflow, recommend npm for larger projects. The npm route integrates cleanly with modern JavaScript bundlers and with frameworks such as React.js, Angular, and Vue.js.
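
If you install via npm, your entry file imports the library as an ES module instead of relying on the global THREE object that the script tag creates. A minimal sketch, assuming a bundler such as Vite or webpack resolves the bare "three" specifier:

// main.js – ES module style
import * as THREE from 'three';

console.log('Three.js revision:', THREE.REVISION);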

Once installed, your basic HTML structure should look like this:

<!DOCTYPE html>
<html>
<head>
    <meta charset="utf-8">
    <title>My First Three.js App</title>
    <style>
        body { margin: 0; overflow: hidden; }
        canvas { display: block; }
    </style>
</head>
<body>
    <script src="js/three.min.js"></script>
    <script src="js/main.js"></script>
</body>
</html>

This structure works across Chrome, Mozilla Firefox, and other modern browsers supporting WebGL technology.
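
If you want to fail gracefully on browsers without WebGL, the capability helper bundled with the Three.js examples can check support up front. A small sketch, assuming a reasonably recent Three.js release; init() stands in for your own setup function:

import WebGL from 'three/examples/jsm/capabilities/WebGL.js';

if (WebGL.isWebGLAvailable()) {
    // Safe to create the renderer and start the animation loop
    init();
} else {
    // Show a friendly fallback message instead of a blank page
    document.body.appendChild(WebGL.getWebGLErrorMessage());
}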

Core Building Blocks

Three.js operates using four fundamental components. Let’s break them down.

Scenes act as containers for all 3D objects. Think of a scene as a stage where your 3D content creation happens:

const scene = new THREE.Scene();

Cameras determine how users view your 3D world. The most common is the PerspectiveCamera, which mimics human vision:

// Parameters: FOV, aspect ratio, near clipping plane, far clipping plane
const camera = new THREE.PerspectiveCamera(75, window.innerWidth / window.innerHeight, 0.1, 1000);
camera.position.z = 5;

Renderers are responsible for drawing your 3D scene to the screen. The WebGL renderer provides the best GPU-accelerated performance:

const renderer = new THREE.WebGLRenderer();
renderer.setSize(window.innerWidth, window.innerHeight);
document.body.appendChild(renderer.domElement);

Lights make objects visible and add realism. Without them, your scene would be completely black:

const light = new THREE.AmbientLight(0x404040); // soft white light
scene.add(light);

Mr.doob (Ricardo Cabello), the creator of this JavaScript 3D library, designed these core components to work together seamlessly while providing flexibility for diverse applications.
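
Putting the four pieces together, a minimal scene looks like the following. It renders a single empty frame (solid black) until you add meshes in the next section:

const scene = new THREE.Scene();

const camera = new THREE.PerspectiveCamera(75, window.innerWidth / window.innerHeight, 0.1, 1000);
camera.position.z = 5;

const renderer = new THREE.WebGLRenderer();
renderer.setSize(window.innerWidth, window.innerHeight);
document.body.appendChild(renderer.domElement);

scene.add(new THREE.AmbientLight(0x404040));

renderer.render(scene, camera); // draw a single frame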

Creating Your First 3D Scene

CodePen demo: Interactive 3D Galaxy Simulation by Voxelo (@VoXelo).

Now let’s build a simple interactive 3D example step-by-step. We’ll create a rotating cube rendered to the canvas that Three.js added to the page:

// Create geometry (the shape)
const geometry = new THREE.BoxGeometry(1, 1, 1);

// Create material (the appearance)
const material = new THREE.MeshBasicMaterial({ color: 0x00ff00 });

// Create mesh (combining geometry and material)
const cube = new THREE.Mesh(geometry, material);

// Add to scene
scene.add(cube);

To add animation and movement, we’ll drive a render loop with requestAnimationFrame:

function animate() {
    requestAnimationFrame(animate);

    // Rotate the cube
    cube.rotation.x += 0.01;
    cube.rotation.y += 0.01;

    // Render the scene
    renderer.render(scene, camera);
}
animate();

Handling browser resizing is crucial for responsive web-based 3D design:

window.addEventListener('resize', () => {
    // Update camera aspect ratio
    camera.aspect = window.innerWidth / window.innerHeight;
    camera.updateProjectionMatrix();

    // Update renderer size
    renderer.setSize(window.innerWidth, window.innerHeight);
});

Finally, adding basic user controls improves interactivity. The OrbitControls addon, shipped with the Three.js examples, lets users rotate, pan, and zoom the camera:

import { OrbitControls } from 'three/examples/jsm/controls/OrbitControls.js';

const controls = new OrbitControls(camera, renderer.domElement);
controls.enableDamping = true; // adds smooth inertia
controls.dampingFactor = 0.05;

With just these few blocks of code, you’ve created a real-time 3D graphics application running on WebGL. Three.js makes this kind of client-side rendering accessible to web developers without requiring extensive GLSL or low-level graphics programming knowledge.

For more complex projects, many developers use tools like Visual Studio Code, integrate with Blender for 3D modeling, or use TypeScript for stronger typing. The open source 3D library nature of Three.js allows integration with numerous technologies in the web browser graphics ecosystem.

Understanding 3D Concepts in Three.js

The Coordinate System

Three.js uses a right-handed coordinate system common in 3D graphics programming. Understanding this system is essential for creating interactive web graphics.

The X-axis extends horizontally (left to right), the Y-axis extends vertically (bottom to top), and the Z-axis points toward the viewer:

// Positioning an object in 3D space
cube.position.set(2, 1, -3); // x: 2 units right, y: 1 unit up, z: 3 units into the screen (away from the default camera)

World space vs local space is a core concept in this JavaScript 3D library. World space coordinates are absolute positions in your 3D scene, while local space coordinates are relative to a parent object:

// Creating a parent-child relationship
const parent = new THREE.Object3D();
parent.add(cube); // cube now uses local space relative to parent

parent.position.set(5, 0, 0);
parent.rotation.y = Math.PI / 4; // 45 degrees rotation

Units in Three.js have no fixed real-world equivalent; you decide what a “unit” represents. Many developers treat one unit as one meter. Whatever you choose, a consistent scale keeps your project manageable.
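
To see the difference between local and world space in practice, you can ask any object for its world-space position. Building on the parent example above (a small illustrative sketch):

// Local position: relative to the parent object
console.log(cube.position);

// World position: with the parent's translation and rotation applied
const worldPosition = new THREE.Vector3();
cube.getWorldPosition(worldPosition);
console.log(worldPosition);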

Geometries and Meshes

Three.js provides many built-in geometry types through its web 3D programming capabilities:

// Creating different geometries
const box = new THREE.BoxGeometry(1, 1, 1);
const sphere = new THREE.SphereGeometry(1, 32, 32); // radius, widthSegments, heightSegments
const torus = new THREE.TorusGeometry(1, 0.4, 16, 100); // radius, tube, radialSegments, tubularSegments

For 3D web graphics beyond basic shapes, custom geometries offer flexibility:

// Creating a custom geometry
const geometry = new THREE.BufferGeometry();
const vertices = new Float32Array([
    -1.0, -1.0, 0.0,
     1.0, -1.0, 0.0,
     0.0,  1.0, 0.0
]);
geometry.setAttribute('position', new THREE.BufferAttribute(vertices, 3));

Complex objects often combine multiple geometries inside a Group to build detailed models:

// Creating a snowman by combining spheres
const snowmanGroup = new THREE.Group();

const bottomSphere = new THREE.Mesh(
    new THREE.SphereGeometry(1, 16, 16),
    new THREE.MeshStandardMaterial({ color: 0xffffff })
);
bottomSphere.position.y = 1;

const middleSphere = new THREE.Mesh(
    new THREE.SphereGeometry(0.7, 16, 16),
    new THREE.MeshStandardMaterial({ color: 0xffffff })
);
middleSphere.position.y = 2.5;

snowmanGroup.add(bottomSphere);
snowmanGroup.add(middleSphere);

GPU-accelerated graphics in modern browsers make such 3D mesh creation very efficient compared to traditional Canvas renderer approaches.

Materials and Textures

Materials define how surfaces look in Three.js. This cross-browser 3D library offers several types:

  • MeshBasicMaterial: Simplest material, not affected by lights
  • MeshLambertMaterial: Matte surfaces with basic lighting
  • MeshPhongMaterial: Shiny surfaces with specular highlights
  • MeshStandardMaterial: Physically-based rendering (PBR) for realistic results

// Different material types
const basic = new THREE.MeshBasicMaterial({ color: 0xff0000 });
const lambert = new THREE.MeshLambertMaterial({ color: 0x00ff00 });
const phong = new THREE.MeshPhongMaterial({ 
    color: 0x0000ff,
    shininess: 100 
});
const standard = new THREE.MeshStandardMaterial({ 
    color: 0xffff00,
    roughness: 0.2,
    metalness: 0.8
});

Textures add detail beyond materials. The Three.js documentation recommends loading textures using TextureLoader:

// Loading a texture
const textureLoader = new THREE.TextureLoader();
const texture = textureLoader.load('textures/wood.jpg');

// Using the texture in a material
const material = new THREE.MeshStandardMaterial({ map: texture });

For realistic surfaces, normal and bump maps add perceived detail without extra geometry, a significant performance win over modeling that detail:

// Adding normal map for surface detail
const normalMap = textureLoader.load('textures/wood_normal.jpg');
material.normalMap = normalMap;
material.normalScale.set(1, 1);

Material properties control appearance. Tools like GitHub and npm provide examples, and frameworks like React Three Fiber integrate these capabilities with React.js:

// Managing material properties
material.roughness = 0.7; // 0 smooth, 1 rough
material.metalness = 0.2; // 0 non-metallic, 1 metallic
material.envMapIntensity = 1.5; // Reflection intensity

The Three.js community on Stack Overflow often shares browser game development techniques using these features. Google Chrome and Mozilla browsers provide excellent support for these HTML5 animation capabilities.

Ricardo Cabello (Mr.doob) designed this library to build on WebGL while staying approachable. Even beginners can create impressive 3D projects with basic JavaScript knowledge.

For more advanced projects, Blender models can be imported, and GLSL shaders can be customized. When working with Three.js vs Babylon.js, each has strengths, but Three.js excels in web visualization tools for a wide range of projects.
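
As a taste of that shader flexibility, a custom GLSL program can be attached through THREE.ShaderMaterial. In the sketch below, the uniform name and colors are illustrative:

const shaderMaterial = new THREE.ShaderMaterial({
    uniforms: {
        uTime: { value: 0 } // illustrative uniform, updated each frame
    },
    vertexShader: `
        void main() {
            gl_Position = projectionMatrix * modelViewMatrix * vec4(position, 1.0);
        }
    `,
    fragmentShader: `
        uniform float uTime;
        void main() {
            gl_FragColor = vec4(abs(sin(uTime)), 0.3, 0.6, 1.0);
        }
    `
});

// Inside the animation loop:
// shaderMaterial.uniforms.uTime.value = performance.now() / 1000;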

Advanced Rendering Techniques


Lighting Types and Effects

Three.js offers several lighting options that transform flat objects into realistic 3D scenes. The right lighting setup dramatically impacts your web-based visualization.

AmbientLight provides uniform illumination with no direction:

const ambient = new THREE.AmbientLight(0x404040, 0.5); // color, intensity
scene.add(ambient);

DirectionalLight mimics distant light sources like the sun:

const directional = new THREE.DirectionalLight(0xffffff, 1);
directional.position.set(5, 10, 7.5);
scene.add(directional);

PointLight emits light in all directions from a single point:

const point = new THREE.PointLight(0xff9000, 1, 100); // color, intensity, distance
point.position.set(0, 10, 0);
scene.add(point);

SpotLight works like a flashlight, creating focused beams:

const spot = new THREE.SpotLight(0xffffff, 1);
spot.position.set(0, 10, 0);
spot.angle = Math.PI / 6; // 30 degrees
spot.penumbra = 0.1; // softness of edge
scene.add(spot);

Shadows add crucial realism to 3D web graphics. Three.js handles shadow mapping through the WebGL renderer:

// Enable shadows in renderer
renderer.shadowMap.enabled = true;
renderer.shadowMap.type = THREE.PCFSoftShadowMap;

// Configure light to cast shadows
directional.castShadow = true;
directional.shadow.mapSize.width = 1024;
directional.shadow.mapSize.height = 1024;

// Allow objects to cast/receive shadows
mesh.castShadow = true;
floor.receiveShadow = true;

Environment lighting uses HDRI (High Dynamic Range Imaging) for realistic reflections. This real-time web graphics technique is popular among 3D content creation professionals:

// Load HDR environment map (RGBELoader ships with the examples, not the core build)
import { RGBELoader } from 'three/examples/jsm/loaders/RGBELoader.js';

const hdrLoader = new RGBELoader();
hdrLoader.load('environment.hdr', function(texture) {
    texture.mapping = THREE.EquirectangularReflectionMapping;
    scene.environment = texture;
});

Ricardo Cabello (Mr.doob) integrated these lighting features to match techniques used in professional tools like Blender and Unity.

Cameras in Depth

Three.js offers different camera types for various rendering scenarios. The JavaScript 3D objects you create will appear differently based on your camera choice.

PerspectiveCamera mimics human vision with foreshortening:

// Parameters: FOV, aspect ratio, near clipping, far clipping
const perspective = new THREE.PerspectiveCamera(
    75, // Field of view in degrees
    window.innerWidth / window.innerHeight, // Aspect ratio
    0.1, // Near clipping plane
    1000 // Far clipping plane
);

OrthographicCamera shows objects at their true size regardless of distance:

// Parameters: left, right, top, bottom, near, far
const width = 10;
const height = width / (window.innerWidth / window.innerHeight);
const ortho = new THREE.OrthographicCamera(
    -width/2, width/2, 
    height/2, -height/2, 
    0.1, 1000
);

Field of view affects perception dramatically in Three.js camera control. Low values (35°) create a telephoto effect while high values (120°) create fisheye distortion.
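
Changing the field of view at runtime only takes effect after the projection matrix is rebuilt:

// Narrow FOV for a telephoto feel; raise it toward 100+ for a fisheye look
camera.fov = 35;
camera.updateProjectionMatrix();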

Camera movement needs careful consideration for interactive 3D content. The Three.js community has developed controls like OrbitControls for this purpose:

import { OrbitControls } from 'three/examples/jsm/controls/OrbitControls.js';

const controls = new OrbitControls(camera, renderer.domElement);
controls.enableDamping = true; // smooth movement
controls.dampingFactor = 0.05;
controls.screenSpacePanning = false;
controls.minDistance = 1;
controls.maxDistance = 50;
controls.maxPolarAngle = Math.PI / 2; // prevent going below ground

Three.js vs Babylon.js comparisons often highlight Three.js’ flexible camera system for browser 3D animation.

Post-Processing Effects

Post-processing enhances rendered scenes with visual effects. The browser game development community frequently uses these for atmosphere and style.

Setting up a post-processing pipeline requires EffectComposer:

import { EffectComposer } from 'three/examples/jsm/postprocessing/EffectComposer.js';
import { RenderPass } from 'three/examples/jsm/postprocessing/RenderPass.js';
import { UnrealBloomPass } from 'three/examples/jsm/postprocessing/UnrealBloomPass.js';

// Create composer
const composer = new EffectComposer(renderer);

// Add render pass (base layer)
const renderPass = new RenderPass(scene, camera);
composer.addPass(renderPass);

// Add bloom effect
const bloomPass = new UnrealBloomPass(
    new THREE.Vector2(window.innerWidth, window.innerHeight),
    1.5, // strength
    0.4, // radius
    0.85 // threshold
);
composer.addPass(bloomPass);

// Use composer instead of renderer in animation loop
function animate() {
    requestAnimationFrame(animate);
    composer.render();
}

Common effects include:

Bloom – Creates light bleeding from bright areas:

import { UnrealBloomPass } from 'three/examples/jsm/postprocessing/UnrealBloomPass.js';

Depth of Field – Simulates camera focus (a usage sketch follows this list):

import { BokehPass } from 'three/examples/jsm/postprocessing/BokehPass.js';

Film Grain – Adds noise for cinematic feel:

import { FilmPass } from 'three/examples/jsm/postprocessing/FilmPass.js';
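
For example, the depth-of-field pass slots into the same composer built above. A hedged sketch — the focus, aperture, and maxblur values are illustrative, and the accepted options can vary between Three.js releases:

const bokehPass = new BokehPass(scene, camera, {
    focus: 5.0,      // distance at which objects are sharp
    aperture: 0.005, // smaller values keep more of the scene in focus
    maxblur: 0.01    // caps how blurry out-of-focus areas get
});
composer.addPass(bokehPass);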

Performance considerations are critical for web 3D environments. Post-processing is GPU-intensive, especially in WebXR applications:

// Lower resolution for performance
composer.setSize(window.innerWidth * 0.75, window.innerHeight * 0.75);
renderer.setPixelRatio(window.devicePixelRatio * 0.5);

GSAP animation library often pairs with Three.js for orchestrating transitions between effect states. This combination powers many dynamic 3D websites.
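
For example, GSAP can tween any numeric Three.js property directly. A minimal sketch, assuming gsap is installed from npm and a cube and camera already exist:

import { gsap } from 'gsap';

// Spin the cube once and pull the camera back over two seconds
gsap.to(cube.rotation, { y: Math.PI * 2, duration: 2, ease: 'power2.inOut' });
gsap.to(camera.position, { z: 10, duration: 2 });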

When working with TypeScript and modern frameworks like React.js, Angular, or Vue.js, these effects integrate well through libraries like React Three Fiber.

With tools from npm and GitHub, plus documentation from Stack Overflow and Three.js examples, implementing these advanced techniques becomes accessible even to intermediate JavaScript developers with basic WebGL knowledge.

Adding Interactivity

User Input and Controls

CodePen demo: CPChallenge: Bugs (butterflies with threejs-toy) by Tommy Ho (@tommyho).

Three.js shines when users can interact with your 3D content. Creating responsive interactive web graphics starts with capturing user input.

Mouse and touch interactions form the foundation:

// Mouse position tracking
const mouse = new THREE.Vector2();

function onMouseMove(event) {
    // Convert to normalized device coordinates (-1 to +1)
    mouse.x = (event.clientX / window.innerWidth) * 2 - 1;
    mouse.y = -(event.clientY / window.innerHeight) * 2 + 1;
}

window.addEventListener('mousemove', onMouseMove);

Touch events require special handling for mobile browser 3D animation:

function onTouchMove(event) {
    if (event.touches.length > 0) {
        mouse.x = (event.touches[0].clientX / window.innerWidth) * 2 - 1;
        mouse.y = -(event.touches[0].clientY / window.innerHeight) * 2 + 1;
    }
}

window.addEventListener('touchmove', onTouchMove);

Keyboard controls add another dimension. Chrome, Firefox, and other browsers trigger consistent events:

const keyboard = {};

function onKeyDown(event) {
    keyboard[event.code] = true;
}

function onKeyUp(event) {
    keyboard[event.code] = false;
}

window.addEventListener('keydown', onKeyDown);
window.addEventListener('keyup', onKeyUp);

// Use in animation loop
function animate() {
    requestAnimationFrame(animate);

    if (keyboard['KeyW']) {
        camera.position.z -= 0.1;
    }
    if (keyboard['KeyS']) {
        camera.position.z += 0.1;
    }

    renderer.render(scene, camera);
}

OrbitControls provides ready-made camera manipulation for exploring a 3D scene:

import { OrbitControls } from 'three/examples/jsm/controls/OrbitControls.js';

const controls = new OrbitControls(camera, renderer.domElement);
controls.enableDamping = true;
controls.dampingFactor = 0.05;

// Update in animation loop
function animate() {
    requestAnimationFrame(animate);
    controls.update();
    renderer.render(scene, camera);
}

First-person navigation uses PointerLockControls for immersive browser-based visualization:

import { PointerLockControls } from 'three/examples/jsm/controls/PointerLockControls.js';

const controls = new PointerLockControls(camera, document.body);

// Lock pointer on click
document.addEventListener('click', () => {
    controls.lock();
});

Community packages on npm and GitHub extend these controls with game-specific functionality.

Raycasting and Object Selection

Raycasting lets users interact with specific objects in your scene. The technique detects when a ray cast from the camera through the pointer position intersects 3D objects:

const raycaster = new THREE.Raycaster();

function onClick(event) {
    // Update mouse position
    mouse.x = (event.clientX / window.innerWidth) * 2 - 1;
    mouse.y = -(event.clientY / window.innerHeight) * 2 + 1;

    // Cast ray from camera through mouse position
    raycaster.setFromCamera(mouse, camera);

    // Check for intersections
    const intersects = raycaster.intersectObjects(scene.children, true);

    if (intersects.length > 0) {
        const object = intersects[0].object;
        // Do something with the selected object
        object.material.color.set(0xff0000);
    }
}

window.addEventListener('click', onClick);

Detecting clicks on 3D objects enables interactive interfaces. JavaScript game development often uses this for object selection:

// Make objects "selectable" with metadata
const cube1 = new THREE.Mesh(geometry, material);
cube1.userData.name = "Cube 1";
cube1.userData.selectable = true;

const cube2 = new THREE.Mesh(geometry, material);
cube2.userData.name = "Cube 2";
cube2.userData.selectable = true;

// Filter selectable objects in raycaster
const selectableObjects = scene.children.filter(obj => obj.userData.selectable);
const intersects = raycaster.intersectObjects(selectableObjects);

Hover states create responsive feedback. This technique, common in web 3D user interface design, improves usability:

let hoveredObject = null;

function onMouseMove(event) {
    mouse.x = (event.clientX / window.innerWidth) * 2 - 1;
    mouse.y = -(event.clientY / window.innerHeight) * 2 + 1;

    raycaster.setFromCamera(mouse, camera);
    const intersects = raycaster.intersectObjects(selectableObjects);

    // Reset previous hover
    if (hoveredObject) {
        hoveredObject.material.emissive.set(0x000000);
        hoveredObject = null;
    }

    // Set new hover
    if (intersects.length > 0) {
        hoveredObject = intersects[0].object;
        hoveredObject.material.emissive.set(0x333333);
    }
}

Mr.doob (Ricardo Cabello) designed Three.js with these interaction patterns in mind, making them accessible through the JavaScript 3D library’s intuitive API.

Physics and Collision

Basic collision detection can be implemented directly in Three.js:

function checkCollision(obj1, obj2) {
    // Create bounding boxes
    const box1 = new THREE.Box3().setFromObject(obj1);
    const box2 = new THREE.Box3().setFromObject(obj2);

    // Check intersection
    return box1.intersectsBox(box2);
}

// Usage in animation loop
function animate() {
    requestAnimationFrame(animate);

    if (checkCollision(player, obstacle)) {
        console.log("Collision detected!");
    }

    renderer.render(scene, camera);
}

For realistic physics, external libraries like Cannon.js (or its maintained fork, cannon-es) integrate well with Three.js:

// Physics engine (here via the cannon-es npm package)
import * as CANNON from 'cannon-es';

// Initialize physics world
const world = new CANNON.World();
world.gravity.set(0, -9.82, 0); // Earth gravity

// Create physics body
const sphereShape = new CANNON.Sphere(1);
const sphereBody = new CANNON.Body({
    mass: 5,
    position: new CANNON.Vec3(0, 10, 0),
    shape: sphereShape
});
world.addBody(sphereBody);

// Create visual representation
const sphereGeometry = new THREE.SphereGeometry(1, 32, 32);
const sphereMaterial = new THREE.MeshStandardMaterial({ color: 0xff0000 });
const sphere = new THREE.Mesh(sphereGeometry, sphereMaterial);
scene.add(sphere);

// Sync in animation loop
function animate() {
    requestAnimationFrame(animate);

    world.step(1/60); // Physics simulation step

    // Copy position from physics to visual
    sphere.position.copy(sphereBody.position);
    sphere.quaternion.copy(sphereBody.quaternion);

    renderer.render(scene, camera);
}

Creating interactive physics-based demos showcases the power of client-side rendering for simulations:

// Add floor
const floorShape = new CANNON.Plane();
const floorBody = new CANNON.Body({
    mass: 0, // static
    shape: floorShape
});
floorBody.quaternion.setFromAxisAngle(
    new CANNON.Vec3(1, 0, 0),
    -Math.PI / 2
);
world.addBody(floorBody);

// Add multiple objects
for (let i = 0; i < 50; i++) {
    const size = Math.random() * 0.5 + 0.1;
    const boxShape = new CANNON.Box(new CANNON.Vec3(size, size, size));
    const boxBody = new CANNON.Body({
        mass: size,
        shape: boxShape,
        position: new CANNON.Vec3(
            Math.random() * 4 - 2,
            Math.random() * 10 + 5,
            Math.random() * 4 - 2
        )
    });
    world.addBody(boxBody);

    // Create matching Three.js object
    const boxGeometry = new THREE.BoxGeometry(size * 2, size * 2, size * 2);
    const boxMaterial = new THREE.MeshStandardMaterial({
        color: Math.random() * 0xffffff
    });
    const box = new THREE.Mesh(boxGeometry, boxMaterial);
    scene.add(box);

    // Store reference for updating
    boxBody.threeMesh = box;
}

React Three Fiber, a popular library combining React.js with Three.js, includes hooks for physics integration:

// React Three Fiber with physics (conceptual example)
function PhysicsBox(props) {
    const [ref, api] = useBox(() => ({ 
        mass: 1,
        position: [0, 10, 0],
        ...props
    }));

    return (
        <mesh
            ref={ref}
            onClick={() => api.applyImpulse([0, 5, 0], [0, 0, 0])}>
            <boxGeometry />
            <meshStandardMaterial color="hotpink" />
        </mesh>
    );
}

When developing for WebXR, physics adds immersion to both virtual reality scenes and augmented reality experiences. Tools like A-Frame, built on Three.js, simplify this integration.

Visual Studio Code with Three.js extensions helps debug these interactive systems. The increasing popularity of 3D web graphics has driven improvements in both tools and documentation.

Optimizing Three.js Applications

Performance Best Practices

Three.js applications can quickly become resource-intensive. Optimizing your code ensures smooth browser 3D animation across devices.

Polygon count directly impacts performance. High-detail models can slow your application:

// Low-poly version for better performance
const detailedSphere = new THREE.SphereGeometry(1, 64, 64); // roughly 8,000 triangles
const simplifiedSphere = new THREE.SphereGeometry(1, 16, 12); // roughly 350 triangles

Use LOD (Level of Detail) to swap models based on distance:

import { LOD } from 'three';

const lod = new LOD();

// Add different detail levels
lod.addLevel(highDetailMesh, 0);    // Use when camera is very close
lod.addLevel(mediumDetailMesh, 10); // Switch at 10 units away
lod.addLevel(lowDetailMesh, 50);    // Switch at 50 units away

scene.add(lod);

Texture size significantly affects GPU memory. WebGL wrapper libraries like Three.js load full-sized textures by default:

// Load appropriate texture sizes
const isMobile = /iPhone|iPad|iPod|Android/i.test(navigator.userAgent);
const textureLoader = new THREE.TextureLoader();
let texture;

if (isMobile) {
    texture = textureLoader.load('textures/material_512.jpg');
} else {
    texture = textureLoader.load('textures/material_2048.jpg');
}

Compression techniques reduce file size. Tools from GitHub repositories help prepare assets:

// Using compressed textures (KTX2Loader ships with the examples)
import { KTX2Loader } from 'three/examples/jsm/loaders/KTX2Loader.js';

const loader = new KTX2Loader()
    .setTranscoderPath('js/libs/basis/')
    .detectSupport(renderer);

loader.load('textures/compressed.ktx2', function(texture) {
    material.map = texture;
    material.needsUpdate = true;
});

Object instancing dramatically improves performance when rendering many identical objects:

// Instead of creating 1000 individual meshes
const geometry = new THREE.BoxGeometry(1, 1, 1);
const material = new THREE.MeshStandardMaterial();

// Create instanced mesh
const instancedMesh = new THREE.InstancedMesh(geometry, material, 1000);

// Position each instance
for (let i = 0; i < 1000; i++) {
    const position = new THREE.Vector3(
        Math.random() * 100 - 50,
        Math.random() * 100 - 50,
        Math.random() * 100 - 50
    );

    const matrix = new THREE.Matrix4();
    matrix.setPosition(position);

    instancedMesh.setMatrixAt(i, matrix);
}

scene.add(instancedMesh);

Mr.doob (Ricardo Cabello) built this JavaScript 3D library with performance in mind. The Three.js documentation includes more optimization techniques.
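
A quick way to check whether these optimizations pay off is the renderer’s built-in statistics, which report draw calls and triangle counts per frame:

// Log render statistics once per second (alongside the animation loop)
setInterval(() => {
    console.log(
        'draw calls:', renderer.info.render.calls,
        'triangles:', renderer.info.render.triangles
    );
}, 1000);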

Loading Models and Assets

Three.js supports various 3D file formats. Common options include:

  • glTF (.glb, .gltf): Most recommended format
  • OBJ (.obj): Common but lacks animations
  • FBX (.fbx): Good for animated models
  • COLLADA (.dae): XML-based format

Loading models efficiently is critical for web-based visualization:

import { GLTFLoader } from 'three/examples/jsm/loaders/GLTFLoader.js';

const loader = new GLTFLoader();

// Load a glTF resource
loader.load(
    // Resource URL
    'models/robot.glb',
    // Called when resource is loaded
    function(gltf) {
        scene.add(gltf.scene);

        // Access animations
        const animations = gltf.animations;
        const mixer = new THREE.AnimationMixer(gltf.scene);
        const action = mixer.clipAction(animations[0]);
        action.play();
    },
    // Called while loading is progressing
    function(xhr) {
        console.log((xhr.loaded / xhr.total * 100) + '% loaded');
    },
    // Called when loading has errors
    function(error) {
        console.error('An error happened', error);
    }
);

Creating progress indicators improves user experience:

// HTML for progress bar
// <div id="progress-container"><div id="progress-bar"></div></div>

const progressBar = document.getElementById('progress-bar');
const progressContainer = document.getElementById('progress-container');

// Show progress during load
loader.load('large-model.glb', 
    function(gltf) {
        scene.add(gltf.scene);
        progressContainer.style.display = 'none';
    },
    function(xhr) {
        if (xhr.lengthComputable) {
            const percentComplete = xhr.loaded / xhr.total * 100;
            progressBar.style.width = percentComplete + '%';
        }
    },
    function(error) {
        progressContainer.style.display = 'none';
        console.error('Error loading model:', error);
    }
);

Asset management systems help organize resources:

// Asset manager
class AssetManager {
    constructor() {
        this.textureLoader = new THREE.TextureLoader();
        this.gltfLoader = new GLTFLoader();
        this.audioLoader = new THREE.AudioLoader();
        this.assets = {};
    }

    loadTexture(name, url) {
        return new Promise((resolve) => {
            this.textureLoader.load(url, (texture) => {
                this.assets[name] = texture;
                resolve(texture);
            });
        });
    }

    loadModel(name, url) {
        return new Promise((resolve) => {
            this.gltfLoader.load(url, (gltf) => {
                this.assets[name] = gltf;
                resolve(gltf);
            });
        });
    }

    get(name) {
        return this.assets[name];
    }
}

// Usage
const assets = new AssetManager();

async function loadAllAssets() {
    await assets.loadTexture('grass', 'textures/grass.jpg');
    await assets.loadModel('tree', 'models/tree.glb');
    startGame(); // All assets loaded
}

Tools like Blender can optimize models before export. Many 3D content creation platforms offer Three.js-specific export options.
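
One common export-time optimization is Draco compression of glTF meshes, which Three.js can decode via DRACOLoader. A sketch — the decoder path and file name are illustrative, and the Draco decoder files must be copied into your project:

import { GLTFLoader } from 'three/examples/jsm/loaders/GLTFLoader.js';
import { DRACOLoader } from 'three/examples/jsm/loaders/DRACOLoader.js';

const dracoLoader = new DRACOLoader();
dracoLoader.setDecoderPath('js/libs/draco/'); // wherever you host the Draco decoder

const gltfLoader = new GLTFLoader();
gltfLoader.setDRACOLoader(dracoLoader);

gltfLoader.load('models/tree-draco.glb', (gltf) => {
    scene.add(gltf.scene);
});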

Mobile Considerations

Mobile devices require special attention for browser-based 3D design. Screen size adaptation is essential:

// Responsive design for mobile
function updateSize() {
    const width = window.innerWidth;
    const height = window.innerHeight;

    // Update camera
    camera.aspect = width / height;
    camera.updateProjectionMatrix();

    // Update renderer
    renderer.setSize(width, height);

    // Use lower pixel ratio on mobile
    const isMobile = /iPhone|iPad|iPod|Android/i.test(navigator.userAgent);
    renderer.setPixelRatio(isMobile ? 1 : window.devicePixelRatio);
}

window.addEventListener('resize', updateSize);

Touch-specific interactions require different handling than mouse events:

// Handle touch for orbit controls
let touchStartX = 0;
let touchStartY = 0;

function onTouchStart(event) {
    if (event.touches.length === 1) {
        touchStartX = event.touches[0].pageX;
        touchStartY = event.touches[0].pageY;
    }
}

function onTouchMove(event) {
    if (event.touches.length === 1) {
        // Calculate drag distance
        const touchX = event.touches[0].pageX;
        const touchY = event.touches[0].pageY;

        const deltaX = touchX - touchStartX;
        const deltaY = touchY - touchStartY;

        // Rotate camera based on touch
        camera.rotation.y += deltaX * 0.01;
        camera.rotation.x += deltaY * 0.01;

        touchStartX = touchX;
        touchStartY = touchY;
    }
}

renderer.domElement.addEventListener('touchstart', onTouchStart, false);
renderer.domElement.addEventListener('touchmove', onTouchMove, false);

Performance tuning for mobile focuses on reducing complexity:

// Detect mobile and adjust settings
if (/iPhone|iPad|iPod|Android/i.test(navigator.userAgent)) {
    // Reduce shadow quality
    renderer.shadowMap.type = THREE.BasicShadowMap;

    // Lower render resolution
    renderer.setPixelRatio(1);

    // Disable certain post-processing effects
    bloomPass.enabled = false;

    // Reduce draw distance
    camera.far = 100;
    camera.updateProjectionMatrix();

    // Use simpler materials
    scene.traverse(function(object) {
        if (object.isMesh) {
            // Replace PBR materials with simpler ones
            if (object.material.isMeshStandardMaterial) {
                object.material = new THREE.MeshLambertMaterial({
                    map: object.material.map,
                    color: object.material.color
                });
            }
        }
    });
}

Chrome and Firefox mobile browsers support WebGL but with varying performance. Testing across devices is vital for web 3D environments.

GPU-accelerated graphics vary widely between devices. Use feature detection to adjust quality:

// Check GPU capabilities (avoid shadowing the Three.js renderer variable)
const gl = renderer.getContext();
const debugInfo = gl.getExtension('WEBGL_debug_renderer_info');
const gpuName = debugInfo ? gl.getParameter(debugInfo.UNMASKED_RENDERER_WEBGL) : '';

// Adjust based on GPU
if (gpuName.indexOf('Apple') !== -1) {
    // iOS-specific optimizations
} else if (gpuName.indexOf('Adreno') !== -1) {
    // Qualcomm-specific optimizations
} else if (gpuName.indexOf('Mali') !== -1) {
    // Mali-specific optimizations
}

WebAssembly integration can boost performance for complex calculations:

// Using WASM for physics (conceptual example)
const { instance: physics } = await WebAssembly.instantiateStreaming(
    fetch('physics.wasm')
);

// Use WASM for intensive calculations
function updatePhysics() {
    // Transfer data to WASM
    const positions = new Float32Array(rigidBodies.length * 3);
    // ... fill array with current positions

    // Call WASM function
    const newPositions = physics.exports.stepSimulation(positions);

    // Update Three.js objects with results
    // ...
}

These optimizations make interactive 3D content accessible across devices while maintaining the visual quality that made Three.js popular among web development frameworks.

Integrating Three.js with Other Technologies

Three.js and Modern JavaScript Frameworks

Three.js integrates seamlessly with popular JavaScript frameworks. This flexibility makes it ideal for complex web applications.

React and Three.js work together through React Three Fiber:

import React, { useRef } from 'react';
import { Canvas, useFrame } from '@react-three/fiber';

function RotatingCube() {
  const meshRef = useRef();

  useFrame(() => {
    meshRef.current.rotation.x += 0.01;
    meshRef.current.rotation.y += 0.01;
  });

  return (
    <mesh ref={meshRef}>
      <boxGeometry args={[1, 1, 1]} />
      <meshStandardMaterial color="orange" />
    </mesh>
  );
}

function App() {
  return (
    <Canvas>
      <ambientLight intensity={0.5} />
      <pointLight position={[10, 10, 10]} />
      <RotatingCube />
    </Canvas>
  );
}

This approach capitalizes on React.js component architecture while leveraging the GPU-accelerated graphics of Three.js.

Vue.js offers its own integration patterns:

<template>
  <div ref="container"></div>
</template>

<script>
import * as THREE from 'three';

export default {
  mounted() {
    const container = this.$refs.container;

    // Initialize Three.js scene
    const scene = new THREE.Scene();
    const camera = new THREE.PerspectiveCamera(75, container.clientWidth / container.clientHeight, 0.1, 1000);
    const renderer = new THREE.WebGLRenderer();

    renderer.setSize(container.clientWidth, container.clientHeight);
    container.appendChild(renderer.domElement);

    // Add objects
    const geometry = new THREE.BoxGeometry();
    const material = new THREE.MeshBasicMaterial({ color: 0x00ff00 });
    this.cube = new THREE.Mesh(geometry, material);
    scene.add(this.cube);

    camera.position.z = 5;

    // Animation loop
    const animate = () => {
      requestAnimationFrame(animate);
      this.cube.rotation.x += 0.01;
      this.cube.rotation.y += 0.01;
      renderer.render(scene, camera);
    };
    animate();
  }
}
</script>

Angular developers use similar patterns to embed Three.js:

import { AfterViewInit, Component, ElementRef, OnInit, ViewChild } from '@angular/core';
import * as THREE from 'three';

@Component({
  selector: 'app-three-scene',
  template: '<div #rendererContainer></div>',
  styles: ['div { width: 100%; height: 400px; }']
})
export class ThreeSceneComponent implements OnInit, AfterViewInit {
  @ViewChild('rendererContainer') rendererContainer!: ElementRef;

  renderer = new THREE.WebGLRenderer();
  scene = new THREE.Scene();
  camera = new THREE.PerspectiveCamera(75, window.innerWidth / window.innerHeight, 0.1, 1000);
  cube!: THREE.Mesh;

  ngOnInit() {
    this.camera.position.z = 5;

    const geometry = new THREE.BoxGeometry();
    const material = new THREE.MeshNormalMaterial();
    this.cube = new THREE.Mesh(geometry, material);
    this.scene.add(this.cube);
  }

  ngAfterViewInit() {
    this.renderer.setSize(this.rendererContainer.nativeElement.clientWidth, this.rendererContainer.nativeElement.clientHeight);
    this.rendererContainer.nativeElement.appendChild(this.renderer.domElement);
    this.animate();
  }

  animate() {
    requestAnimationFrame(() => this.animate());
    this.cube.rotation.x += 0.01;
    this.cube.rotation.y += 0.01;
    this.renderer.render(this.scene, this.camera);
  }
}

Component-based architecture for 3D elements improves code organization. Mr.doob (Ricardo Cabello) didn’t design Three.js specifically for these frameworks, but its flexible API works well with modern JavaScript patterns.
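
Whichever framework you choose, tear the scene down when the component is destroyed; Three.js does not free GPU resources automatically. A hedged sketch of a cleanup helper (the disposeScene name is illustrative):

// Call from ngOnDestroy, Vue's unmounted hook, or a React useEffect cleanup
function disposeScene(scene, renderer) {
    scene.traverse((object) => {
        if (object.isMesh) {
            object.geometry.dispose();
            if (Array.isArray(object.material)) {
                object.material.forEach((m) => m.dispose());
            } else {
                object.material.dispose();
            }
        }
    });
    renderer.dispose(); // releases resources held by the WebGL context
}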

Three.js and Data Visualization

Three.js excels at data visualization in 3D space. The JavaScript graphics API makes complex data more intuitive.

Creating 3D charts starts with positioning elements in the coordinate system:

function createBarChart(data) {
  const chart = new THREE.Group();

  data.forEach((value, index) => {
    const barGeometry = new THREE.BoxGeometry(1, 1, 1); // unit height, scaled per data point
    const barMaterial = new THREE.MeshLambertMaterial({
      color: new THREE.Color(`hsl(${index * 15}, 100%, 50%)`)
    });

    const bar = new THREE.Mesh(barGeometry, barMaterial);
    bar.scale.y = value;
    bar.position.x = index * 1.5;
    bar.position.y = value / 2;

    chart.add(bar);
  });

  return chart;
}

const salesData = [4, 8, 15, 16, 23, 42];
const chart = createBarChart(salesData);
scene.add(chart);

For displaying data in interactive 3D space, raycasting enables user exploration:

// Interacting with data points
const raycaster = new THREE.Raycaster();
const mouse = new THREE.Vector2();

function onClick(event) {
  // Calculate mouse position in normalized device coordinates
  mouse.x = (event.clientX / window.innerWidth) * 2 - 1;
  mouse.y = -(event.clientY / window.innerHeight) * 2 + 1;

  // Update the picking ray
  raycaster.setFromCamera(mouse, camera);

  // Find intersections
  const intersects = raycaster.intersectObjects(chart.children);

  if (intersects.length > 0) {
    const dataPoint = intersects[0].object;
    const dataIndex = chart.children.indexOf(dataPoint);

    // Show data details
    showDataTooltip(salesData[dataIndex], event.clientX, event.clientY);
  }
}

window.addEventListener('click', onClick);

Transitioning between data states creates engaging presentations:

// tween.js (here via the @tweenjs/tween.js npm package)
import * as TWEEN from '@tweenjs/tween.js';

function updateChartData(newData) {
  newData.forEach((value, index) => {
    const bar = chart.children[index];
    const currentHeight = bar.scale.y;

    // Animate to new height
    new TWEEN.Tween({ height: currentHeight })
      .to({ height: value }, 1000)
      .easing(TWEEN.Easing.Elastic.Out)
      .onUpdate(obj => {
        bar.scale.y = obj.height;
        bar.position.y = obj.height / 2;
      })
      .start();
  });
}

// In animation loop
function animate() {
  requestAnimationFrame(animate);
  TWEEN.update();
  renderer.render(scene, camera);
}

The GSAP animation library also works well for these transitions. The real-time web graphics capabilities make data feel alive.

Three.js and AR/VR

WebXR brings immersive experiences to browsers. Three.js simplifies creating AR and VR applications.

Basic WebXR setup requires just a few lines:

import { VRButton } from 'three/examples/jsm/webxr/VRButton.js';

// Create WebXR button
document.body.appendChild(VRButton.createButton(renderer));

// Enable XR rendering
renderer.xr.enabled = true;

// Use XR animation loop
renderer.setAnimationLoop(function() {
  renderer.render(scene, camera);
});

Creating augmented reality experiences with Three.js and WebXR:

import { ARButton } from 'three/examples/jsm/webxr/ARButton.js';

// Set up AR session
document.body.appendChild(ARButton.createButton(renderer, {
  requiredFeatures: ['hit-test']
}));

renderer.xr.enabled = true;

// Create reticle for placement
const reticle = new THREE.Mesh(
  new THREE.RingGeometry(0.15, 0.2, 32).rotateX(-Math.PI / 2),
  new THREE.MeshBasicMaterial()
);
reticle.matrixAutoUpdate = false;
reticle.visible = false;
scene.add(reticle);

// Controller for interaction
const controller = renderer.xr.getController(0);
controller.addEventListener('select', onSelect);
scene.add(controller);

// Place objects on tap
function onSelect() {
  if (reticle.visible) {
    const model = createModel(); // Your 3D model
    model.position.setFromMatrixPosition(reticle.matrix);
    scene.add(model);
  }
}

// Animation and hit testing
let hitTestSource = null;
let hitTestSourceRequested = false;

renderer.setAnimationLoop(function(timestamp, frame) {
  if (frame) {
    const referenceSpace = renderer.xr.getReferenceSpace();
    const session = renderer.xr.getSession();

    if (hitTestSourceRequested === false) {
      session.requestReferenceSpace('viewer').then(function(referenceSpace) {
        session.requestHitTestSource({ space: referenceSpace }).then(function(source) {
          hitTestSource = source;
        });
      });
      hitTestSourceRequested = true;
    }

    if (hitTestSource) {
      const hitTestResults = frame.getHitTestResults(hitTestSource);

      if (hitTestResults.length) {
        const hit = hitTestResults[0];
        reticle.visible = true;
        reticle.matrix.fromArray(hit.getPose(referenceSpace).transform.matrix);
      } else {
        reticle.visible = false;
      }
    }
  }

  renderer.render(scene, camera);
});

Building virtual reality scenes requires attention to performance and user comfort:

// VR optimization techniques
function optimizeForVR() {
  // Reduce render resolution for better performance
  renderer.setPixelRatio(1);

  // Use simpler materials
  scene.traverse(function(object) {
    if (object.isMesh) {
      if (object.material.type === 'MeshStandardMaterial') {
        object.material = new THREE.MeshLambertMaterial({
          map: object.material.map,
          color: object.material.color
        });
      }
    }
  });

  // Add grab interactions for controllers
  const rightController = renderer.xr.getController(0);
  rightController.addEventListener('selectstart', onControllerGrab);
  rightController.addEventListener('selectend', onControllerRelease);

  scene.add(rightController);
}

// Add hand models
import { XRHandModelFactory } from 'three/examples/jsm/webxr/XRHandModelFactory.js';

const handModelFactory = new XRHandModelFactory();

// Hand tracking input for rendering hands
const hand1 = renderer.xr.getHand(0);
hand1.add(handModelFactory.createHandModel(hand1, 'mesh'));
scene.add(hand1);

A-Frame, built on Three.js, simplifies WebXR development with HTML-like syntax:

<script src="https://aframe.io/releases/1.2.0/aframe.min.js"></script>

<a-scene>
  <a-box position="-1 0.5 -3" rotation="0 45 0" color="#4CC3D9"></a-box>
  <a-sphere position="0 1.25 -5" radius="1.25" color="#EF2D5E"></a-sphere>
  <a-cylinder position="1 0.75 -3" radius="0.5" height="1.5" color="#FFC65D"></a-cylinder>
  <a-plane position="0 0 -4" rotation="-90 0 0" width="4" height="4" color="#7BC8A4"></a-plane>
  <a-sky color="#ECECEC"></a-sky>
</a-scene>

The web browser graphics capabilities continue to evolve, with Chrome and Firefox leading WebXR implementation. Unity and Unreal Engine developers often find Three.js an accessible alternative for web-based immersive projects.

Three.js vs Babylon.js comparisons often note that while Babylon has more built-in VR features, Three.js offers greater flexibility and a smaller footprint for custom WebXR experiences.

Conclusion

Understanding what Three.js is opens up limitless possibilities for web development. This JavaScript 3D library transforms ordinary websites into interactive 3D environments that engage users in ways previously impossible with standard HTML and CSS. By simplifying WebGL programming, Three.js makes browser-based visualization accessible to developers without specialized graphics knowledge.

Web 3D performance continues to improve across Chrome, Firefox, and other modern browsers. The open source nature of Three.js encourages community contributions on GitHub, with developers constantly expanding its capabilities. From interactive content creation to complex data visualization, Three.js bridges the gap between traditional web technologies and advanced 3D graphics.

Whether you’re using it standalone or integrating with React.js, Angular, or Vue.js, Three.js remains the most flexible solution for bringing real-time web graphics to life. As WebXR evolves, this browser 3D engine will only become more valuable for creating immersive digital experiences.

Author

Bogdan Sandu is the principal designer and editor of this website. He specializes in web and graphic design, focusing on creating user-friendly websites, innovative UI kits, and unique fonts. Many of his resources are available on various design marketplaces. Over the years, he's worked with a range of clients and contributed to design publications like Designmodo, WebDesignerDepot, and Speckyboy among others.