A selection of work spanning photography, code, algorithms, and web experiments. Each project explores different aspects of visual creation—from manual craft to computational processes.
Photographic Collage
Timeline: 2020–Present | Medium: Photography, Digital Manipulation
This ongoing series explores composition through fragmentation and reassembly. I shoot individual elements—architectural details, street scenes, abstract patterns—and manually compose them into cohesive visual narratives. The work sits somewhere between traditional photomontage and digital collage.
Process
The technique is methodical: photograph individual components with consistent lighting and perspective, catalog by visual properties (form, texture, color), then compose in layers. Some pieces use 5-6 elements; others combine 50+. The challenge is creating unity from disparate sources—making the seams intentional rather than hidden.
Early work focused on architectural geometry—clean lines, repeating patterns, urban infrastructure. More recent pieces incorporate organic elements and embrace controlled chaos. The aesthetic has evolved from pristine compositions to something more textured and layered.
Technical Approach
- Capture: Fujifilm X-T4, prime lenses (23mm, 35mm, 56mm)
- Raw processing: Lightroom for color grading and tonal consistency
- Composition: Photoshop with layer masks, blend modes, and adjustment layers
- Output: High-resolution prints on matte paper
Influences
David Hockney's photo collages, Hannah Höch's Dada montages, the Bechers' typological approach to photography. Also influenced by Suprematist composition principles—Malevich's use of geometric forms in dynamic arrangements.
What I've Learned
Constraints breed creativity. Working with a limited palette of photographed elements forces you to see connections you'd otherwise miss. Also: composition is about rhythm and weight distribution. A successful collage has visual movement—your eye travels through it, not around it.
Generative Art Systems
Timeline: 2019–Present | Stack: JavaScript, p5.js, Canvas API, GLSL
Code-based visual systems that generate unique outputs from algorithmic rules. Not generative AI—these are deterministic systems where I write the algorithm, set parameters, and let the system explore the possibility space. Each run produces something different, but within defined constraints.
Current Projects
Gradient Fields
Smooth color transitions influenced by Perlin noise and flow fields. The system generates organic, flowing gradients that feel hand-painted despite being algorithmically produced. Parameters control color palettes, noise scale, flow direction, and blend modes.
Tech: p5.js, HSB color space, Perlin noise. Real-time manipulation of noise parameters. Exportable as high-res PNGs for print.
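As a rough illustration of the idea, here is a minimal p5.js sketch of a noise-driven gradient in HSB. The hue range, noise scale, and cell size are placeholder values, not the project's actual parameters.

```javascript
// Minimal p5.js sketch: a Perlin-noise-driven gradient field.
// Palette, noise scale, and resolution are illustrative, not the real settings.
const noiseScale = 0.004; // lower = smoother transitions

function setup() {
  createCanvas(800, 800);
  colorMode(HSB, 360, 100, 100); // HSB makes hue interpolation predictable
  noStroke();
  noLoop();
}

function draw() {
  for (let y = 0; y < height; y += 4) {
    for (let x = 0; x < width; x += 4) {
      // Perlin noise gives spatially coherent values in [0, 1]
      const n = noise(x * noiseScale, y * noiseScale);
      // Map noise into a constrained hue band instead of the full 0-360 range
      fill(lerp(180, 260, n), 60, lerp(40, 95, n));
      rect(x, y, 4, 4);
    }
  }
  // save('gradient-field.png') exports the canvas for print
}
```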
Chaos Attractors
Visualizations of chaotic dynamical systems—Lorenz attractors, Rössler attractors, and custom variations. Small changes in initial conditions produce wildly different outputs, creating intricate, never-repeating patterns. The challenge is finding parameters that produce visually interesting (not just mathematically interesting) results.
Tech: Canvas API for performance, differential equations solved using Runge-Kutta methods, WebGL shaders for color mapping.
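A bare-bones sketch of the approach, assuming a `<canvas>` element on the page: the classic Lorenz system integrated with fourth-order Runge-Kutta and projected straight onto 2D. The constants are the standard textbook values; color mapping and the WebGL pass are left out.

```javascript
// Lorenz attractor traced onto a 2D canvas with a classic RK4 integrator.
// sigma, rho, beta are the textbook values; projection and scale are illustrative.
const sigma = 10, rho = 28, beta = 8 / 3;

function lorenz([x, y, z]) {
  return [sigma * (y - x), x * (rho - z) - y, x * y - beta * z];
}

// One fourth-order Runge-Kutta step of size h
function rk4(state, h) {
  const add = (a, b, s) => a.map((v, i) => v + b[i] * s);
  const k1 = lorenz(state);
  const k2 = lorenz(add(state, k1, h / 2));
  const k3 = lorenz(add(state, k2, h / 2));
  const k4 = lorenz(add(state, k3, h));
  return state.map((v, i) => v + (h / 6) * (k1[i] + 2 * k2[i] + 2 * k3[i] + k4[i]));
}

const canvas = document.querySelector('canvas'); // assumes a <canvas> in the page
const ctx = canvas.getContext('2d');
let state = [0.1, 0, 0]; // tiny changes here produce very different curves

function frame() {
  for (let i = 0; i < 50; i++) {
    state = rk4(state, 0.005);
    // Project (x, z) onto the canvas; z stays positive for the Lorenz system
    ctx.fillRect(canvas.width / 2 + state[0] * 10, canvas.height - state[2] * 10, 1, 1);
  }
  requestAnimationFrame(frame);
}
frame();
```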
Recursive Subdivision
Geometric patterns created through recursive splitting of shapes. Start with a simple form (square, circle, triangle), recursively divide according to rules, apply color gradients or noise textures. Produces patterns reminiscent of Islamic tilework, circuit boards, or aerial views of agricultural land.
Tech: Custom JavaScript, no frameworks. Polygon subdivision algorithms, Delaunay triangulation for certain variants.
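A minimal sketch of the subdivision loop on a plain canvas, again assuming a `<canvas>` element on the page. The split ratios, stopping rules, and palette are illustrative, and the Delaunay variant isn't shown.

```javascript
// Recursive rectangle subdivision, no frameworks.
// Split rules, depth, and colors are illustrative placeholders.
const canvas = document.querySelector('canvas'); // assumes a <canvas> in the page
const ctx = canvas.getContext('2d');

function subdivide(x, y, w, h, depth) {
  // Stop at max depth, or randomly on deeper levels for variety
  if (depth === 0 || (depth < 4 && Math.random() < 0.3)) {
    ctx.fillStyle = `hsl(${200 + Math.random() * 40}, 60%, ${30 + Math.random() * 50}%)`;
    ctx.fillRect(x, y, w, h);
    ctx.strokeRect(x, y, w, h);
    return;
  }
  if (w > h) {
    const split = w * (0.3 + Math.random() * 0.4); // avoid degenerate slivers
    subdivide(x, y, split, h, depth - 1);
    subdivide(x + split, y, w - split, h, depth - 1);
  } else {
    const split = h * (0.3 + Math.random() * 0.4);
    subdivide(x, y, w, split, depth - 1);
    subdivide(x, y + split, w, h - split, depth - 1);
  }
}

subdivide(0, 0, canvas.width, canvas.height, 7);
```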
Philosophy
The goal isn't to replace manual art-making—it's to explore what becomes possible when you can iterate thousands of times in minutes. I write the rules, the system explores variations. It's a collaboration: I provide constraints and aesthetics, the algorithm provides surprise and scale.
Most important lesson: randomness isn't enough. Pure random noise is visually uninteresting. Good generative systems balance control and emergence—structured randomness. Perlin noise over white noise. Constrained color palettes over full RGB. Rules with a little wiggle room.
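A tiny p5.js comparison makes the difference visible: the same canvas drawn with white noise on top and Perlin noise below. The values are purely illustrative; the point is coherence.

```javascript
// White noise (top band) vs Perlin noise (bottom band).
function setup() {
  createCanvas(600, 200);
  noLoop();
}

function draw() {
  for (let x = 0; x < width; x++) {
    // Top: independent random values -- visually just static
    stroke(random(255));
    line(x, 0, x, height / 2);
    // Bottom: Perlin noise -- neighbouring values are correlated,
    // so the tone drifts smoothly instead of flickering
    stroke(noise(x * 0.01) * 255);
    line(x, height / 2, x, height);
  }
}
```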
Technical Challenges
- Performance: Complex systems can freeze browsers. Moved from Processing/Java to JavaScript for web deployment, then to WebGL for heavy computation.
- Determinism: Ensuring the same seed produces the same output across browsers and devices. Important for print reproduction (see the seeded-PRNG sketch after this list).
- Parameter tuning: Finding the sweet spot between too rigid (boring) and too chaotic (noise). Usually involves generating hundreds of variations and evaluating manually.
- Color theory: Algorithmic color selection is hard. Learned to work in HSB rather than RGB, use restricted palettes, and implement perceptually uniform color spaces.
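For the determinism point, one workable approach is a small seeded PRNG such as mulberry32 in place of Math.random(); the sketch below shows that idea, not necessarily the seeding strategy used in these projects (p5.js also offers randomSeed() and noiseSeed()).

```javascript
// mulberry32: a tiny seeded PRNG. The same 32-bit seed yields the same
// sequence on any JS engine -- unlike Math.random(), which can't be seeded.
function mulberry32(seed) {
  return function () {
    let t = (seed += 0x6d2b79f5);
    t = Math.imul(t ^ (t >>> 15), t | 1);
    t ^= t + Math.imul(t ^ (t >>> 7), t | 61);
    return ((t ^ (t >>> 14)) >>> 0) / 4294967296;
  };
}

const rand = mulberry32(42);
console.log(rand(), rand()); // identical values on every browser and device
```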
Tools & Resources
Primary tool: p5.js for most work—good balance of power and simplicity. For performance: raw Canvas API or WebGL shaders. Learning: Tyler Hobbs' essays on generative art, Anders Hoff's writing on constraints, the Coding Train YouTube channel for techniques.
Web Experiments
Timeline: 2021–Present | Stack: Modern JavaScript, WebGL, Canvas API, CSS
The web is an underexplored medium for visual and interactive experiences. Most websites look the same—three-column layouts, hero sections, standard interactions. These experiments push beyond utility into expression: sites as art pieces, interactions as composition, code as medium.
Notable Experiments
Interactive Particle Systems
Mouse-following particle trails that respond to movement speed and direction. Particles spawn based on velocity, inherit momentum, and decay over time. The user becomes part of the composition—their movement creates the visual outcome.
Implementation: Canvas-based for 60fps performance. Particle pool to avoid garbage collection. Quadtree spatial partitioning for collision detection (when needed). The challenge: making it feel organic, not computerized. Required tuning friction coefficients, spawn rates, and decay curves.
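A stripped-down sketch of the pooling and decay ideas, assuming a full-page `<canvas>`. The friction, spawn-rate, and decay numbers are placeholders, and the quadtree step is omitted.

```javascript
// Pooled mouse-trail particles on a canvas.
const canvas = document.querySelector('canvas'); // assumes a full-screen <canvas>
const ctx = canvas.getContext('2d');

// Fixed-size pool: particles are recycled, never allocated per frame,
// so the garbage collector stays quiet.
const pool = Array.from({ length: 500 }, () => ({ x: 0, y: 0, vx: 0, vy: 0, life: 0 }));
let mouse = { x: 0, y: 0, px: 0, py: 0 };

canvas.addEventListener('pointermove', (e) => {
  mouse = { x: e.offsetX, y: e.offsetY, px: mouse.x, py: mouse.y };
});

function spawn() {
  const vx = mouse.x - mouse.px, vy = mouse.y - mouse.py;
  const speed = Math.hypot(vx, vy);
  // Faster movement spawns more particles, each inheriting some momentum
  for (let i = 0; i < Math.min(speed / 4, 6); i++) {
    const p = pool.find((p) => p.life <= 0);
    if (!p) break;
    Object.assign(p, { x: mouse.x, y: mouse.y, vx: vx * 0.2, vy: vy * 0.2, life: 1 });
  }
  mouse.px = mouse.x;
  mouse.py = mouse.y;
}

function frame() {
  ctx.fillStyle = 'rgba(0, 0, 0, 0.1)'; // translucent clear leaves soft trails
  ctx.fillRect(0, 0, canvas.width, canvas.height);
  spawn();
  for (const p of pool) {
    if (p.life <= 0) continue;
    p.vx *= 0.95; p.vy *= 0.95; // friction
    p.x += p.vx; p.y += p.vy;
    p.life -= 0.02;             // decay
    ctx.fillStyle = `rgba(255, 255, 255, ${p.life})`;
    ctx.fillRect(p.x, p.y, 2, 2);
  }
  requestAnimationFrame(frame);
}
frame();
```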
Shader-Based Backgrounds
Real-time animated backgrounds using WebGL fragment shaders. Noise-based distortions, color cycling, and procedural textures. The aesthetic draws from VHS glitch art and analog signal interference.
Technical details: Written in GLSL, compiled via Three.js. Time-based uniforms for animation. The beauty of shaders: runs on GPU, stays smooth even on complex compositions. The difficulty: shader code is alien if you're used to JavaScript—takes time to think in parallel computation.
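To show the shape of the setup, here is a minimal full-screen shader background using Three.js's ShaderMaterial with a time uniform. The GLSL is an illustrative warp-and-color-cycle, not the shaders used in these pieces.

```javascript
// Minimal Three.js full-screen shader background driven by a time uniform.
import * as THREE from 'three';

const renderer = new THREE.WebGLRenderer();
renderer.setSize(window.innerWidth, window.innerHeight);
document.body.appendChild(renderer.domElement);

const scene = new THREE.Scene();
const camera = new THREE.OrthographicCamera(-1, 1, 1, -1, 0, 1);

const material = new THREE.ShaderMaterial({
  uniforms: { uTime: { value: 0 } }, // the time-based uniform drives the animation
  vertexShader: /* glsl */ `
    varying vec2 vUv;
    void main() {
      vUv = uv;
      gl_Position = vec4(position, 1.0); // full-screen quad in clip space
    }
  `,
  fragmentShader: /* glsl */ `
    varying vec2 vUv;
    uniform float uTime;
    void main() {
      // Wobble the coordinates over time, then map them to a color cycle
      vec2 p = vUv + 0.05 * vec2(sin(vUv.y * 10.0 + uTime), cos(vUv.x * 10.0 + uTime));
      gl_FragColor = vec4(0.5 + 0.5 * sin(uTime + p.xyx * 6.0 + vec3(0.0, 2.0, 4.0)), 1.0);
    }
  `,
});

scene.add(new THREE.Mesh(new THREE.PlaneGeometry(2, 2), material));

const clock = new THREE.Clock();
renderer.setAnimationLoop(() => {
  material.uniforms.uTime.value = clock.getElapsedTime();
  renderer.render(scene, camera);
});
```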
Custom Cursor Effects
Replacing the default cursor with custom visuals—trailing dots, magnetic attraction to links, morphing shapes. Subtle but transforms the feel of a site.
UX consideration: Must not sacrifice usability for aesthetics. The cursor needs to clearly indicate position and clickability. Learned to keep effects subtle on desktop, disable entirely on mobile/tablet.
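A minimal version of the trailing-dot effect; the element, easing factor, and styling are illustrative choices, and it only activates for fine-pointer devices, matching the mobile caveat above.

```javascript
// A trailing custom cursor: one dot that eases toward the pointer.
if (window.matchMedia('(pointer: fine)').matches) {
  const dot = document.createElement('div');
  dot.style.cssText =
    'position:fixed;top:0;left:0;width:12px;height:12px;border-radius:50%;' +
    'background:#fff;mix-blend-mode:difference;pointer-events:none;z-index:9999';
  document.body.appendChild(dot);

  let target = { x: 0, y: 0 };
  const pos = { x: 0, y: 0 };
  window.addEventListener('pointermove', (e) => (target = { x: e.clientX, y: e.clientY }));

  (function frame() {
    // Lerp toward the pointer; the lag is what makes it read as a trail
    pos.x += (target.x - pos.x) * 0.15;
    pos.y += (target.y - pos.y) * 0.15;
    dot.style.transform = `translate(${pos.x - 6}px, ${pos.y - 6}px)`;
    requestAnimationFrame(frame);
  })();
}
```

The default cursor is left visible and the dot ignores pointer events, so position and clickability stay clear.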
Scroll-Linked Animations
Elements that reveal, transform, or parallax based on scroll position. Using Intersection Observer API for performance—no scroll event listeners thrashing the main thread.
Best practices: Use CSS transforms (GPU-accelerated) over position changes. Debounce/throttle any JavaScript-heavy operations. Test on low-end devices—what runs at 60fps on a MacBook might stutter on a Chromebook.
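A small sketch of the pattern, with the .reveal class name and thresholds chosen for illustration: IntersectionObserver toggles a class, and the CSS animates only transform and opacity.

```javascript
// Scroll-triggered reveal with IntersectionObserver: no scroll listeners,
// and only GPU-friendly transform/opacity are animated.
const observer = new IntersectionObserver(
  (entries) => {
    for (const entry of entries) {
      if (entry.isIntersecting) {
        entry.target.classList.add('is-visible');
        observer.unobserve(entry.target); // reveal once, then stop watching
      }
    }
  },
  { threshold: 0.2, rootMargin: '0px 0px -10% 0px' }
);

document.querySelectorAll('.reveal').forEach((el) => observer.observe(el));

// Matching CSS (transform + opacity only, so the compositor does the work):
// .reveal { opacity: 0; transform: translateY(24px); transition: all 0.6s ease; }
// .reveal.is-visible { opacity: 1; transform: none; }
```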
Philosophy of Web as Medium
The web is inherently interactive and temporal. Unlike a painting (static) or a video (linear), web experiences are responsive, non-linear, and participatory. The user is co-creator through their interactions.
But interactivity for its own sake is hollow. The best web experiences use interaction to reveal, to invite exploration, to create moments of surprise. Every interaction should feel intentional, not arbitrary.
Technical Constraints
The web has limits: performance varies wildly across devices, browsers have quirks, users expect certain conventions. The art is working within these constraints—progressive enhancement over breaking convention entirely.
- Performance budget: Aim for < 3MB total page weight, 60fps animations, < 2s load time on 3G
- Accessibility: Custom interactions must not break keyboard navigation or screen readers, and must respect reduced-motion preferences (see the sketch after this list)
- Cross-browser: Test in Chrome, Firefox, Safari. Especially Safari—it's the new IE in terms of lagging web standards
- Mobile: Touch interactions are different from mouse. Hover effects don't work. Design for thumbs, not pointers.
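For the reduced-motion point above, a minimal check looks like this; the class name and CSS hook are illustrative stand-ins for whatever a given experiment actually does.

```javascript
// Respect the user's reduced-motion preference before running decorative animation.
const reduceMotion = window.matchMedia('(prefers-reduced-motion: reduce)');

function applyMotionPreference() {
  // Toggle a class the CSS can key off, and skip JS-driven animation loops
  document.documentElement.classList.toggle('no-motion', reduceMotion.matches);
}

applyMotionPreference();
// React if the user flips the OS setting while the page is open
reduceMotion.addEventListener('change', applyMotionPreference);

// Matching CSS:
// .no-motion * { animation: none !important; transition: none !important; }
```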
Influences & References
Studios: Active Theory, Resn, Locomotive. Individuals: Bruno Simon (Three.js portfolio), Lynn Fisher (yearly single-div CSS art), Jen Simmons (layout experiments). Communities: CodePen for inspiration, Codrops for tutorials, Awwwards for seeing what's possible.
Technical Writing & Documentation
Timeline: 2018–Present | Topics: Web Development, Algorithms, Creative Coding
Writing as a way of learning. Documenting the process of building, the decisions made, the problems solved, and the lessons learned. Technical writing isn't about showing expertise—it's about making the implicit explicit, externalizing knowledge that usually stays tacit.
Approach
Most technical writing is bad because it prioritizes comprehensiveness over comprehension. It documents everything without explaining anything. Good technical writing is teaching—it meets readers where they are and brings them forward.
My process: build something, break it, fix it, then write about what I learned. The writing focuses on the why (not just the what), common pitfalls, mental models, and when you should (or shouldn't) use a particular approach.
Content Focus
- Web fundamentals: Deep dives into how browsers actually work—rendering, reflow, painting, compositing. Understanding the browser helps you write faster code.
- CSS architecture: How to structure styles for maintainability. When to use cascades vs. components. The trade-offs between different methodologies (BEM, CSS-in-JS, utility-first).
- Performance: Practical optimization—not micro-optimizations that don't matter, but big wins like code splitting, lazy loading, image optimization, caching strategies.
- Creative coding: Techniques for generative art, animation principles, working with Canvas and WebGL, audio visualization.
- Algorithms: Implementations and explanations of useful algorithms—sorting, pathfinding, noise generation, spatial partitioning. Not competitive programming—practical algorithms for real projects.
Writing Philosophy
Clarity over cleverness. Short sentences. Concrete examples. Code snippets that actually run. Diagrams when a picture is worth a thousand words. No jargon without explanation. No "simply" or "just"—if it were simple, the reader wouldn't be reading.
Also: admit what you don't know. Say "I think" instead of declaring absolutes. Provide multiple approaches when there's no single right answer. Link to sources so readers can go deeper.
Technical Stack for Writing
Markdown for portability. Static site generator (currently Astro) for minimal overhead. Syntax highlighting via Prism.js. Interactive examples using CodePen embeds when appropriate. Version control with Git—writing is code, treat it the same way.
Why Write
Three reasons: (1) Writing forces clarity—you can't explain something you don't understand. (2) Documentation helps future you—I reference my own posts constantly. (3) Sharing knowledge compounds—every post helps multiple people, multiplied over time.
Also: imposter syndrome is real, but writing helps. Documenting your learning proves you're making progress. And often, the beginner's perspective is valuable—you remember what was confusing in ways experts have forgotten.