Fragment is a minimalistic shader playground using the GLSL language so that everyone can learn, have fun and create super-advanced generative art to share with a simple link or embed in a webpage.
For it to work, you will have to write a vertex shader and a fragment shader. The former is executed only once per vertex (4 times, since the screen quad has 4 corners) while the latter is executed for each pixel being rendered on screen. In addition, you can specify a Twitter handle (displayed at the bottom right) and a short description (displayed at the bottom center).
An important part of Fragment is the built-in uniforms. Those are values passed directly from the main JS thread (on the CPU) to the shader scope (on the GPU). They are meant to add extra capabilities and make Fragment sketches more fun! Every uniform comes with one or more sketch examples to demonstrate its usage and, sometimes, how to combine them.
Fragment.ink is a minimalist playground, a very minimalist one! This is on purpose, but it also facilitates its development. As a result, you cannot “create an account on Fragment.ink”, it’s just not a thing right now. Maybe it will be at some point, but it’s unlikely.
Attributes are numerical values passed to the vertex shader, paired with each vertex. Generally, each vertex receives a position attribute, defining its position in 3D space, but in addition to this, there can be as many kinds of attributes as necessary per vertex (colors, speed, radius, etc.).
position
Type: vec3
This is the position in 3D space for each vertex. Along with the projectionMatrix and modelViewMatrix uniforms, this is used to determine the position of the projection plane in screen space.
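Put together, a minimal Fragment vertex shader looks like this (position, projectionMatrix and modelViewMatrix are all provided by ThreeJS, so no declarations are needed):

```glsl
// Minimal vertex shader: project each corner of the screen quad.
void main() {
  gl_Position = projectionMatrix * modelViewMatrix * vec4(position, 1.0);
}
```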
uv
Type: vec2
This is the position in screen space within the projection plane. The .x property goes from left to right in the interval [0.0, 1.0] and the .y property goes from bottom to top in the interval [0.0, 1.0]. Note that even if the screen ratio is not 1, the intervals are always normalized (within [0.0, 1.0]).
Passing a varying version of uv to the fragment shader is the best way to address a texel position (most of the provided examples do that).
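A minimal pair of shaders doing exactly that could look like the following (vUv is just a conventional name for the varying, not something Fragment imposes):

```glsl
// Vertex shader: forward the uv attribute to the fragment shader.
varying vec2 vUv;

void main() {
  vUv = uv;
  gl_Position = projectionMatrix * modelViewMatrix * vec4(position, 1.0);
}
```

```glsl
// Fragment shader: use the interpolated uv as a texel position.
varying vec2 vUv;

void main() {
  gl_FragColor = vec4(vUv.x, vUv.y, 0.0, 1.0); // simple uv gradient
}
```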
Under the hood, Fragment uses ThreeJS, which already adds some default uniforms, attributes, #defines and other GLSL embellishments so that, by default, the shaders run fine on WebGL2 (all this comes with ShaderMaterial). Those uniforms are nice and helpful, but they are not what makes Fragment fun to use!
In addition, Fragment is adding the following uniforms:
modelViewMatrix
Type: mat4
This is the multiplication of the model matrix (of the plane on which the shader is displayed) and the view matrix (from the orthographic camera looking at the plane). This is actually a built-in from ThreeJS.
Along with the projectionMatrix (below) and the position attribute, this is used to determine the position of the projection plane in screen space.
projectionMatrix
Type: mat4
This is the matrix that captures the characteristics of the camera used to display the shader. In the case of Fragment, this is an orthographic camera. This is actually a built-in from ThreeJS.
Along with the modelViewMatrix (above) and the position attribute, this is used to determine the position of the projection plane in screen space.
resolution
Type: vec2
This is the size of the shader playground in pixels; typically, the dimensions of the canvas element that the ThreeJS renderer is using.
In the example bnw-meta, the resolution is used to compute the varying centeredCoords, which redefines the coordinates as a Cartesian orthonormal space, rather than the typical uv space.
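One plausible way to compute such a varying is sketched below (the exact formula in bnw-meta may differ; the idea is a centered origin with ratio-corrected axes):

```glsl
// Vertex shader: derive centered, ratio-corrected coordinates
// from uv and resolution.
uniform vec2 resolution;
varying vec2 centeredCoords;

void main() {
  float ratio = resolution.x / resolution.y;
  // origin at the center, y spans [-1, +1], x spans [-ratio, +ratio]
  centeredCoords = (uv - 0.5) * 2.0 * vec2(ratio, 1.0);
  gl_Position = projectionMatrix * modelViewMatrix * vec4(position, 1.0);
}
```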
pointerPositionCentered
Type: vec2
This gives the 2D position of the mouse pointer (or touch), with the origin (0, 0) at the center of the screen, the x axis going from left to right and the y axis from bottom to top. From bottom to top, the y axis has a length of 2 and the x axis is proportional, meaning that if you have a wide screen with a 16/9 ratio, then the x axis is going to be about 3.55 from left to right (2 × 16 / 9 ≈ 3.55). Generally, screen or texture coordinates (uv) in GLSL have their origin at the bottom-left and are then normalized to have axes of length 1, no matter the ratio. The idea behind pointerPositionCentered is to have a more intuitive space that does not suffer from ratio deformation. With pointerPositionCentered, the best is to use centeredCoords instead of uv. See the vertex shader of the example bnw-meta for a demo. (Yet, if you prefer to use the regular uv, you can.)
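For instance, a fragment shader can light up a disc around the pointer; since both vectors live in the same centered space, the disc stays round on any screen ratio (centeredCoords is assumed to be a varying computed in the vertex shader, as in bnw-meta):

```glsl
// Fragment shader: draw a white disc around the pointer.
uniform vec2 pointerPositionCentered;
varying vec2 centeredCoords; // assumed computed in the vertex shader

void main() {
  float d = distance(centeredCoords, pointerPositionCentered);
  float disc = 1.0 - smoothstep(0.1, 0.12, d); // soft-edged disc
  gl_FragColor = vec4(vec3(disc), 1.0);
}
```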
frameCounter
Type: float
This is the index of the current frame, incremented at every frame at a non-constant FPS. The frameCounter is convenient for creating animations, but since the FPS may vary due to performance, the animation may not always run at the same speed.
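A typical frame-based animation simply feeds frameCounter into a periodic function (the 0.05 factor is an arbitrary speed, not a Fragment convention):

```glsl
// Fragment shader: frame-based pulse (speed depends on actual FPS).
uniform float frameCounter;
varying vec2 vUv; // assumed forwarded from the vertex shader

void main() {
  float pulse = 0.5 + 0.5 * sin(frameCounter * 0.05);
  gl_FragColor = vec4(vUv * pulse, pulse, 1.0);
}
```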
clock
Type: float
This is the number of milliseconds since the loading of the page. Under the hood, this is fuelled by performance.now(). The clock is convenient for creating animations based on actual elapsed time rather than frames. In some cases, this is better to use than frameCounter, but it really depends on the use case.
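The same pulse as above, but time-based and thus FPS-independent, could look like this (assuming clock is in milliseconds, as described here):

```glsl
// Fragment shader: time-based pulse, independent of FPS.
uniform float clock;

void main() {
  float t = clock / 1000.0;                  // seconds since page load
  float pulse = 0.5 + 0.5 * sin(t * 3.1416); // one cycle every ~2 seconds
  gl_FragColor = vec4(vec3(pulse), 1.0);
}
```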
mouseDown
Type: bool
This value tells whether the mouse button is down or the screen is being touched. Note that this applies to both left and right click. Check the example simple-click for a demo.
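A minimal use of this uniform, in the spirit of the simple-click example, is to invert the colors while the pointer is down:

```glsl
// Fragment shader: invert colors while the pointer is down.
uniform bool mouseDown;
varying vec2 vUv; // assumed forwarded from the vertex shader

void main() {
  vec3 color = vec3(vUv, 0.5);
  if (mouseDown) {
    color = 1.0 - color; // invert on click/touch
  }
  gl_FragColor = vec4(color, 1.0);
}
```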
scroll
Type: vec2
This vector holds the scroll speed sent from JS. The value scroll.x is for horizontal scrolling while scroll.y is for vertical scrolling. Both have a default value of 0 (not scrolling) and can be positive or negative depending on the scroll direction. Since the scroll value is sent by the wheel event, the values are not normalized in [-1, +1] and can peak at a few hundred (pixels).
scrollAccumulator
Type: vec2
These are the accumulated values of the above scroll. Scrolling in the positive direction and then scrolling by about the same amount in the opposite direction will bring the scrollAccumulator (at the given .x or .y) back to zero, or even into the negative.
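For example, the accumulator can scroll a pattern vertically; since the raw values are pixel-scale, they need taming (the /500.0 divisor below is an arbitrary choice, not something Fragment prescribes):

```glsl
// Fragment shader: scroll vertically through a stripe pattern.
uniform vec2 scrollAccumulator;
varying vec2 vUv; // assumed forwarded from the vertex shader

void main() {
  float y = vUv.y + scrollAccumulator.y / 500.0; // hypothetical scale
  float stripes = step(0.5, fract(y * 10.0));    // 10 horizontal stripes
  gl_FragColor = vec4(vec3(stripes), 1.0);
}
```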
enableMicrophone
Type: bool
This boolean lets you know whether the microphone is enabled for this sketch. To enable it, switch on the corresponding blue switch on the editing panel. Once enabled, the sound-related uniforms below will start to contain actual data. Note: your web browser will ask you to confirm the mic access.
microphoneVolume
Type: float
This is the volume of the microphone in the range [0, 1]
. This only works when the microphone is enabled.
microphoneTimeDomain
Type: sampler2D
This is the time-based waveform of the current sound, stored in a 2D texture of dimensions 1024x1. Its values are normalized in [0, 1] and the neutral point is 0.5 (no sound). Since this 2D texture uses only a single line, the .y coordinate must be fixed at 0.5, while .x spans the range [0, 1] just like a regular texture. This only works when the microphone is enabled.
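Sampling it therefore always uses vec2(x, 0.5); for example, to draw the waveform across the screen:

```glsl
// Fragment shader: draw the microphone waveform.
uniform sampler2D microphoneTimeDomain;
varying vec2 vUv; // assumed forwarded from the vertex shader

void main() {
  // the texture is 1024x1, so y is fixed at 0.5
  float wave = texture2D(microphoneTimeDomain, vec2(vUv.x, 0.5)).r;
  // thin white line where the waveform crosses this row of pixels
  float line = 1.0 - smoothstep(0.0, 0.01, abs(vUv.y - wave));
  gl_FragColor = vec4(vec3(line), 1.0);
}
```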
microphoneFft
Type: sampler2D
This is the Fourier transform (frequency domain) windowed on the above microphoneTimeDomain. This 2D texture has dimensions 1024x1, where 1024 is the number of frequency bins, from low to high (up to half of the sampling frequency). Since this 2D texture uses only a single line, the .y coordinate must be fixed at 0.5, while .x spans the range [0, 1] just like a regular texture. This only works when the microphone is enabled.
On this texture, only the .r (red) component is present, and the values are in decibels in the range [-Infinity, 0], though in practice they are usually in [-200, 0]. The left side of the texture is for low frequencies (aka. low pitch, close to 0 Hz) and the right side is for high frequencies (aka. high pitch, close to 24 kHz).
enableWebcam
Type: bool
This boolean tells whether the webcam is on or off. If on, the image stream is going to be available in the sampler2D
called webcamTexture
. To switch on the camera, you need to toggle the camera switch on the editing panel. The first time you do that, the web browser will ask you to confirm, for security reasons.
webcamTexture
Type: sampler2D
This is the texture as given by the webcam, updated at every frame. For it to actually contain proper data, the webcam toggle must be switched on. In the JS main thread, the webcam image width is requested to be 720 pixels, but depending on the available hardware, the actual size could be different. As a result, the most reliable way to obtain the texture size is to use the GLSL built-in function textureSize. Follow the Fragment example sketch webcam-simple, and especially its vertex shader, to see how this works.
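In a fragment shader, that looks like the following (textureSize requires WebGL2 / GLSL ES 3.0, which Fragment targets; how you use the ratio to correct vUv is up to your sketch):

```glsl
// Fragment shader: display the webcam feed without hard-coding its size.
uniform sampler2D webcamTexture;
varying vec2 vUv; // assumed forwarded from the vertex shader

void main() {
  ivec2 size = textureSize(webcamTexture, 0); // dimensions at mip level 0
  float camRatio = float(size.x) / float(size.y);
  // camRatio can be used to adjust vUv and avoid stretching
  gl_FragColor = texture(webcamTexture, vUv);
}
```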
colorMaps
Type: sampler2D
This is a texture that contains 43 colormaps in one go. The whole texture is 43 pixels in height and 512 pixels in width. You can find complete examples of how to look up a specific colormap in the examples colormap-plasma (applied to the screen uv) and colormap-metaball (applied to metaball intensity).
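Since each colormap occupies one row of the 512x43 texture, a lookup picks a row by index and sweeps the x axis (the row index 12 below is an arbitrary pick; sampling the middle of the row avoids bleeding into neighbours):

```glsl
// Fragment shader: color the screen with one of the 43 colormaps.
uniform sampler2D colorMaps;
varying vec2 vUv; // assumed forwarded from the vertex shader

void main() {
  float row = 12.0;                // hypothetical colormap index, 0..42
  float y = (row + 0.5) / 43.0;    // center of that row
  gl_FragColor = texture2D(colorMaps, vec2(vUv.x, y));
}
```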
previousFrame
Type: sampler2D
This is a texture that contains the rendering of the previous frame. It can be used to create trailing effects with image persistence, such as in the persistent-metaball sample.
Limitation: this is not compatible (yet?) with webcamTexture.
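A basic trail, in the spirit of the persistent-metaball sample, blends a fresh spot with a faded copy of the previous frame (the 0.95 decay factor is an arbitrary choice; centeredCoords is assumed to be computed in the vertex shader):

```glsl
// Fragment shader: trailing effect by blending with the previous frame.
uniform sampler2D previousFrame;
uniform vec2 pointerPositionCentered;
varying vec2 vUv;            // assumed forwarded from the vertex shader
varying vec2 centeredCoords; // assumed computed in the vertex shader

void main() {
  float d = distance(centeredCoords, pointerPositionCentered);
  vec3 spot = vec3(1.0 - smoothstep(0.05, 0.06, d)); // fresh pointer spot
  vec3 previous = texture2D(previousFrame, vUv).rgb;  // last frame
  gl_FragColor = vec4(max(spot, previous * 0.95), 1.0); // 0.95 = decay
}
```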
imageTextures[]
Type: Array of sampler2D
This array can contain up to 10 textures. To add a texture, its URL must be added to the Textures panel of the editor. If you fill the first slot (#0), then your texture will be available as imageTextures[0] in the GLSL code.
Check the slideshow sample to get familiar with using external textures!
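Displaying the first slot is a one-liner; note that sampler array indices must be constant expressions in GLSL, so you cannot compute the index at runtime:

```glsl
// Fragment shader: display the texture loaded into slot #0.
uniform sampler2D imageTextures[10];
varying vec2 vUv; // assumed forwarded from the vertex shader

void main() {
  gl_FragColor = texture2D(imageTextures[0], vUv);
}
```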
You can share a short URL to your sketch with the built-in URL shortener, accessible from the top-right + button and then share.
If you want to embed a sketch, you can generate an embed code from the top-right +
menu and then use the embed button.
Since Fragment does not have a concept of session/user, the only ways to save your sketches are to:
Each sketch comes with a text field for a custom description and another one for your Twitter handle. They will nicely display to whomever you share your sketch with.