HDR Background Remover
Drag and drop or click to select.
Private and secure
Everything happens in your browser. Your files never touch our servers.
Blazing fast
No uploading, no waiting. Convert the moment you drop a file.
Actually free
No account required. No hidden costs. No file size tricks.
Background removal separates a subject from its surroundings so you can place it on transparency, swap the scene, or composite it into a new design. Under the hood you’re estimating an alpha matte—a per-pixel opacity from 0 to 1—and then alpha-compositing the foreground over something else. This is the math from Porter–Duff and the cause of familiar pitfalls like “fringes” and straight vs. premultiplied alpha. For practical guidance on premultiplication and linear color, see Microsoft’s Win2D notes, Søren Sandmann, and Lomont’s write-up on linear blending.
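The "over" step itself is tiny: with a straight (unassociated) alpha a, each output channel is a·F + (1 − a)·B. A minimal per-pixel sketch (Python purely for illustration; the helper name is ours, not from any library):

```python
def over(fg, alpha, bg):
    """Porter-Duff "over" with straight (unassociated) alpha.

    fg, bg: (r, g, b) tuples with channels in 0..255; alpha in 0.0..1.0.
    Each channel blends as alpha * foreground + (1 - alpha) * background.
    """
    return tuple(round(alpha * f + (1 - alpha) * b) for f, b in zip(fg, bg))
```

With premultiplied alpha the foreground channels are already scaled, so the blend simplifies to F′ + (1 − a)·B; mixing the two conventions is exactly what produces the fringes mentioned above.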
The main ways people remove backgrounds
1) Chroma key (“green/blue screen”)
If you can control capture, paint the backdrop a solid color (often green) and key that hue away. It’s fast, battle-tested in film and broadcast, and ideal for video. The trade-offs are lighting and wardrobe: colored light spills onto edges (especially hair), so you’ll use despill tools to neutralize contamination. Good primers include Nuke’s docs, Mixing Light, and a hands-on Fusion demo.
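As a toy illustration of the key-then-despill idea (production keyers such as Nuke's work in more careful color spaces; the function names and the linear falloff here are our own simplifications):

```python
def green_key_alpha(r, g, b, falloff=40):
    """Crude green-screen key for one pixel (channels 0..255).

    Alpha falls from 1 to 0 as green exceeds the larger of red and blue.
    """
    spill = g - max(r, b)
    if spill <= 0:
        return 1.0                       # not green-dominant: keep the pixel
    return max(0.0, 1.0 - spill / falloff)

def despill_green(r, g, b):
    """Simple despill: cap green at max(red, blue) to neutralize green cast
    on edge pixels (hair, shoulders) that picked up bounced screen light."""
    return (r, min(g, max(r, b)), b)
```

Pure screen pixels key out entirely, skin tones survive untouched, and edge pixels get a partial alpha plus a despill correction, which is why the two steps travel together.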
2) Interactive segmentation (classic CV)
For single images with messy backgrounds, interactive algorithms need a few user hints—e.g., a loose rectangle or scribbles—and converge to a crisp mask. The canonical method is GrabCut (book chapter), which learns color models for foreground/background and uses graph cuts iteratively to separate them. You’ll see similar ideas in GIMP’s Foreground Select based on SIOX (ImageJ plugin).
3) Image matting (fine-grained alpha)
Matting solves fractional transparency at wispy boundaries (hair, fur, smoke, glass). Classic closed-form matting takes a trimap (definitely-fore/definitely-back/unknown) and solves a linear system for alpha with strong edge fidelity. Modern deep image matting trains neural nets on the Adobe Composition-1K dataset (MMEditing docs), and is evaluated with metrics like SAD, MSE, Gradient, and Connectivity (benchmark explainer).
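In practice a trimap is often bootstrapped from a hard mask by marking a band around the boundary as unknown. A small pure-Python sketch (a brute-force neighborhood scan standing in for the morphological erode/dilate a real pipeline would use):

```python
def make_trimap(mask, band=1):
    """Binary mask -> trimap: 0 = background, 128 = unknown, 255 = foreground.

    Any pixel within `band` steps (Chebyshev distance) of a pixel with the
    opposite label is marked unknown; a matting solver then refines only
    that band into fractional alpha.
    """
    h, w = len(mask), len(mask[0])
    tri = [[255 if mask[y][x] else 0 for x in range(w)] for y in range(h)]
    for y in range(h):
        for x in range(w):
            for dy in range(-band, band + 1):
                for dx in range(-band, band + 1):
                    ny, nx = y + dy, x + dx
                    if 0 <= ny < h and 0 <= nx < w and mask[ny][nx] != mask[y][x]:
                        tri[y][x] = 128
    return tri
```

Widening `band` gives the solver more room for wispy detail at the cost of more unknown pixels to solve.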
4) Deep learning cutouts (no trimap)
- U2-Net (salient-object detection) is a strong general “remove background” engine (repo).
- MODNet targets real-time portrait matting (PDF).
- F, B, Alpha (FBA) Matting jointly predicts foreground, background, and alpha to reduce color halos (repo).
- Background Matting V2 assumes a background plate and yields strand-level mattes in real time at up to 4K/30fps (project page, repo).
Related segmentation work is also useful: DeepLabv3+ refines boundaries with an encoder–decoder and atrous convolutions (PDF); Mask R-CNN gives per-instance masks (PDF); and SAM (Segment Anything) is a promptable foundation model that zero-shots masks on unfamiliar images.
What popular tools do
- Photoshop: Remove Background quick action runs “Select Subject → layer mask” under the hood (confirmed here; tutorial).
- GIMP: Foreground Select (SIOX).
- Canva: 1-click Background Remover for images and short video.
- remove.bg: web app + API for automation.
- Apple devices: system-level “Lift Subject” in Photos/Safari/Quick Look (cutouts on iOS).
Workflow tips for cleaner cutouts
- Shoot smart. Good lighting and strong subject–background contrast help every method. With green/blue screens, plan for despill (guide).
- Start broad, refine narrow. Run an automatic selection (Select Subject, U2-Net, SAM), then refine edges with brushes or matting (e.g., closed-form).
- Mind semi-transparency. Glass, veils, motion blur, flyaway hair need true alpha (not just a hard mask). Methods that also recover F/B/α minimize halos.
- Know your alpha. Straight and premultiplied alpha produce different edge behavior; export and composite consistently (see overview, Hargreaves).
- Pick the right output. For “no background,” deliver a raster with a clean alpha (e.g., PNG/WebP) or keep layered files with masks if further edits are expected. The key is the quality of the alpha you computed—rooted in Porter–Duff.
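The straight-versus-premultiplied distinction in the tips above comes down to whether color channels are already scaled by alpha. A minimal conversion pair (floating-point channels in 0..1; the names are ours):

```python
def premultiply(r, g, b, a):
    """Straight -> premultiplied: scale each color channel by alpha."""
    return (r * a, g * a, b * a, a)

def unpremultiply(r, g, b, a):
    """Premultiplied -> straight. A fully transparent pixel carries no
    recoverable color, so return black rather than divide by zero."""
    if a == 0.0:
        return (0.0, 0.0, 0.0, 0.0)
    return (r / a, g / a, b / a, a)
```

Filtering or scaling a straight-alpha image blends in the meaningless color of transparent pixels, which is one common source of halos; converting to premultiplied first avoids it.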
Quality & evaluation
Academic work reports SAD, MSE, Gradient, and Connectivity errors on Composition-1K. If you’re picking a model, look for those metrics (metric defs; Background Matting metrics section). For portraits/video, MODNet and Background Matting V2 are strong; for general “salient object” images, U2-Net is a solid baseline; for tough transparency, FBA can be cleaner.
Common edge cases (and fixes)
- Hair & fur: favor matting (trimap or portrait matting like MODNet) and inspect on a checkerboard.
- Fine structures (bike spokes, fishing line): use high-res inputs and a boundary-aware segmenter such as DeepLabv3+ as a pre-step before matting.
- See-through stuff (smoke, glass): you need fractional alpha and often foreground color estimation (FBA).
- Video conferencing: if you can capture a clean plate, Background Matting V2 looks more natural than naive “virtual background” toggles.
Where this shows up in the real world
- E-commerce: marketplaces (e.g., Amazon) often require a pure white main image background; see Product image guide (RGB 255,255,255).
- Design tools: Canva’s Background Remover and Photoshop’s Remove Background streamline quick cutouts.
- On-device convenience: iOS/macOS “Lift Subject” is great for casual sharing.
Why cutouts sometimes look fake (and fixes)
- Color spill: green/blue light wraps onto the subject—use despill controls or targeted color replacement.
- Halo/fringes: usually an alpha-interpretation mismatch (straight vs. premultiplied) or edge pixels contaminated by the old background; convert/interpret correctly (overview, details).
- Wrong blur/grain: paste a razor-sharp subject into a soft background and it pops; match lens blur and grain after compositing (see Porter–Duff basics).
TL;DR playbook
- If you control capture: use chroma key; light evenly; plan despill.
- If it’s a one-off photo: try Photoshop’s Remove Background, Canva’s remover, or remove.bg; refine with brushes/matting for hair.
- If you need production-grade edges: use matting (closed-form or deep) and check alpha on transparency; mind alpha interpretation.
- For portraits/video: consider MODNet or Background Matting V2; for click-guided segmentation, SAM is a powerful front-end.
What is the HDR format?
High Dynamic Range image
High Dynamic Range (HDR) imaging is a technology that aims to bridge the gap between the human eye's capability to perceive a wide range of luminosity levels and the traditional digital imaging systems' limitations in capturing, processing, and displaying such ranges. Unlike standard dynamic range (SDR) images, which have a limited ability to showcase the extremes of light and dark within the same frame, HDR images can display a broader spectrum of luminance levels. This results in pictures that are more vivid, realistic, and closely aligned to what the human eye perceives in the real world.
The concept of dynamic range is central to understanding HDR imaging. Dynamic range is the ratio between the lightest light and the darkest dark that an imaging system can capture, process, or display. It is typically measured in stops, each stop representing a doubling or halving of the amount of light. Traditional SDR images operate within a dynamic range of about 6 to 9 stops. HDR technology aims to surpass this limit significantly, aspiring to match or even exceed the human eye's dynamic range of approximately 14 to 24 stops under certain conditions.
HDR imaging is made possible through a combination of advanced capture techniques, innovative processing algorithms, and display technologies. At the capture stage, multiple exposures of the same scene are taken at different luminance levels. These exposures capture the detail in the darkest shadows through to the brightest highlights. The HDR process then involves combining these exposures into a single image that contains a far greater dynamic range than could be captured in a single exposure using traditional digital imaging sensors.
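The merging step described above can be sketched per pixel: assuming an already-linearized sensor response, each exposure yields a radiance estimate z/t, and a "hat" weight discounts values near the clipping points. This follows the spirit of Debevec and Malik's weighting; the exact weight function varies by implementation:

```python
def merge_exposures(values, times):
    """Fuse bracketed exposures of one pixel into a relative radiance.

    values: pixel values 0..255, one per exposure (assumed linear response).
    times:  matching shutter times in seconds.
    """
    num = den = 0.0
    for z, t in zip(values, times):
        w = min(z, 255 - z)      # hat weight: distrust under/overexposed values
        num += w * (z / t)       # this exposure's radiance estimate
        den += w
    return num / den if den else 0.0
```

A pixel reading 100 at 1 s and 200 at 2 s implies the same radiance from both shots, so the weighted average returns exactly that value; disagreement near the extremes is dominated by the better-exposed shot.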
The processing of HDR images involves mapping the wide range of luminance levels captured into a format that can be efficiently stored, transmitted, and ultimately displayed. Tone mapping is a crucial part of this process. It translates the high dynamic range of the captured scene into a dynamic range that is compatible with the target display or output medium, all while striving to maintain the visual impact of the scene's original luminance variations. This often involves sophisticated algorithms that carefully adjust brightness, contrast, and color saturation to produce images that look natural and appealing to the viewer.
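The simplest widely used global tone-mapping curve is Reinhard's L/(1 + L), which compresses unbounded scene luminance into the display range while leaving dark values nearly untouched (a sketch of the global form only; the full operator adds key-value scaling and local adaptation):

```python
def reinhard(lum):
    """Reinhard global tone map: luminance in [0, inf) -> display range [0, 1).

    Small values pass through almost linearly (l / (1 + l) ~= l for l << 1);
    large values compress smoothly toward 1 instead of clipping.
    """
    return lum / (1.0 + lum)
```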
HDR images are typically stored in specialized file formats that can accommodate the extended range of luminance information. Formats such as Radiance HDR, JPEG-HDR, and OpenEXR were developed specifically for this purpose, and TIFF offers floating-point variants that serve the same role. These formats use techniques such as floating-point encoding and expanded color spaces to precisely represent the wide range of brightness and color information in an HDR image. This preserves the high fidelity of the HDR content while ensuring compatibility with a broad ecosystem of HDR-enabled devices and software.
Displaying HDR content requires screens capable of higher brightness levels, deeper blacks, and a wider color gamut than what standard displays can offer. HDR-compatible displays use technologies like OLED (Organic Light Emitting Diodes) and advanced LCD (Liquid Crystal Display) panels with LED (Light Emitting Diode) backlighting enhancements to achieve these characteristics. The ability of these displays to render both subtle and stark luminance differences dramatically enhances the viewer's sense of depth, detail, and realism.
The proliferation of HDR content has been further facilitated by the development of HDR standards and metadata. Standards such as HDR10, Dolby Vision, and Hybrid Log-Gamma (HLG) specify guidelines for encoding, transmitting, and rendering HDR content across different platforms and devices. HDR metadata plays a vital role in this ecosystem by providing information about the color calibration and luminance levels of the content. This enables devices to optimize their HDR rendering capabilities according to the specific characteristics of each piece of content, ensuring a consistently high-quality viewing experience.
One of the challenges in HDR imaging is the need for a seamless integration into existing workflows and technologies, which are predominantly geared towards SDR content. This includes not only the capture and processing of images but also their distribution and display. Despite these challenges, the adoption of HDR is growing rapidly, thanks in large part to the support of major content creators, streaming services, and electronics manufacturers. As HDR technology continues to evolve and become more accessible, it is expected to become the standard for a wide range of applications, from photography and cinema to video games and virtual reality.
Another challenge associated with HDR technology is the balance between the desire for increased dynamic range and the need to maintain compatibility with existing display technologies. While HDR provides an opportunity to dramatically enhance visual experiences, there is also a risk that poorly implemented HDR can result in images that appear either too dark or too bright on displays that are not fully HDR-compatible. Proper tone mapping and careful consideration of end-user display capabilities are essential to ensure that HDR content is accessible to a wide audience and provides a universally improved viewing experience.
Environmental considerations are also becoming increasingly important in the discussion of HDR technology. The higher power consumption required for the brighter displays of HDR-capable devices poses challenges for energy efficiency and sustainability. Manufacturers and engineers are continuously working to develop more energy-efficient methods of achieving high brightness and contrast levels without compromising the environmental footprint of these devices.
The future of HDR imaging looks promising, with ongoing research and development focused on overcoming the current limitations and expanding the technology's capabilities. Emerging technologies, such as quantum dot displays and micro-LEDs, hold the potential to further enhance the brightness, color accuracy, and efficiency of HDR displays. Additionally, advancements in capture and processing technologies aim to make HDR more accessible to content creators by simplifying the workflow and reducing the need for specialized equipment.
In the realm of content consumption, HDR technology is also opening new avenues for immersive experiences. In video gaming and virtual reality, HDR can dramatically enhance the sense of presence and realism by more accurately reproducing the brightness and color diversity of the real world. This not only improves the visual quality but also deepens the emotional impact of digital experiences, making them more engaging and lifelike.
Beyond entertainment, HDR technology has applications in fields such as medical imaging, where its ability to display a wider range of luminance levels can help reveal details that may be missed in standard images. Similarly, in fields such as astronomy and remote sensing, HDR imaging can capture the nuance of celestial bodies and Earth's surface features with unprecedented clarity and depth.
In conclusion, HDR technology represents a significant advancement in digital imaging, offering an enhanced visual experience that brings digital content closer to the richness and depth of the real world. Despite the challenges associated with its implementation and widespread adoption, the benefits of HDR are clear. As this technology continues to evolve and integrate into various industries, it has the potential to revolutionize how we capture, process, and perceive digital imagery, opening new possibilities for creativity, exploration, and understanding.
Supported formats
AAI.aai
AAI Dune image
AI.ai
Adobe Illustrator CS2
AVIF.avif
AV1 Image File Format
BAYER.bayer
Raw Bayer Image
BMP.bmp
Microsoft Windows bitmap image
CIN.cin
Cineon Image File
CLIP.clip
Image Clip Mask
CMYK.cmyk
Raw cyan, magenta, yellow, and black samples
CUR.cur
Microsoft icon
DCX.dcx
ZSoft IBM PC multi-page Paintbrush
DDS.dds
Microsoft DirectDraw Surface
DPX.dpx
SMPTE 268M-2003 (DPX 2.0) image
DXT1.dxt1
Microsoft DirectDraw Surface
EPDF.epdf
Encapsulated Portable Document Format
EPI.epi
Adobe Encapsulated PostScript Interchange format
EPS.eps
Adobe Encapsulated PostScript
EPSF.epsf
Adobe Encapsulated PostScript
EPSI.epsi
Adobe Encapsulated PostScript Interchange format
EPT.ept
Encapsulated PostScript with TIFF preview
EPT2.ept2
Encapsulated PostScript Level II with TIFF preview
EXR.exr
High dynamic-range (HDR) image
FF.ff
Farbfeld
FITS.fits
Flexible Image Transport System
GIF.gif
CompuServe graphics interchange format
HDR.hdr
High Dynamic Range image
HEIC.heic
High Efficiency Image Container
HRZ.hrz
Slow Scan TeleVision
ICO.ico
Microsoft icon
ICON.icon
Microsoft icon
J2C.j2c
JPEG-2000 codestream
J2K.j2k
JPEG-2000 codestream
JNG.jng
JPEG Network Graphics
JP2.jp2
JPEG-2000 File Format Syntax
JPE.jpe
Joint Photographic Experts Group JFIF format
JPEG.jpeg
Joint Photographic Experts Group JFIF format
JPG.jpg
Joint Photographic Experts Group JFIF format
JPM.jpm
JPEG-2000 File Format Syntax
JPS.jps
Joint Photographic Experts Group JPS format
JPT.jpt
JPEG-2000 File Format Syntax
JXL.jxl
JPEG XL image
MAP.map
Multi-resolution Seamless Image Database (MrSID)
MAT.mat
MATLAB level 5 image format
PAL.pal
Palm pixmap
PALM.palm
Palm pixmap
PAM.pam
Common 2-dimensional bitmap format
PBM.pbm
Portable bitmap format (black and white)
PCD.pcd
Photo CD
PCT.pct
Apple Macintosh QuickDraw/PICT
PCX.pcx
ZSoft IBM PC Paintbrush
PDB.pdb
Palm Database ImageViewer Format
PDF.pdf
Portable Document Format
PDFA.pdfa
Portable Document Archive Format
PFM.pfm
Portable float format
PGM.pgm
Portable graymap format (gray scale)
PGX.pgx
JPEG 2000 uncompressed format
PICT.pict
Apple Macintosh QuickDraw/PICT
PJPEG.pjpeg
Joint Photographic Experts Group JFIF format
PNG.png
Portable Network Graphics
PNG00.png00
PNG inheriting bit-depth, color-type from original image
PNG24.png24
Opaque or binary transparent 24-bit RGB (zlib 1.2.11)
PNG32.png32
Opaque or binary transparent 32-bit RGBA
PNG48.png48
Opaque or binary transparent 48-bit RGB
PNG64.png64
Opaque or binary transparent 64-bit RGBA
PNG8.png8
Opaque or binary transparent 8-bit indexed
PNM.pnm
Portable anymap
PPM.ppm
Portable pixmap format (color)
PS.ps
Adobe PostScript file
PSB.psb
Adobe Large Document Format
PSD.psd
Adobe Photoshop bitmap
RGB.rgb
Raw red, green, and blue samples
RGBA.rgba
Raw red, green, blue, and alpha samples
RGBO.rgbo
Raw red, green, blue, and opacity samples
SIX.six
DEC SIXEL Graphics Format
SUN.sun
Sun Rasterfile
SVG.svg
Scalable Vector Graphics
TIFF.tiff
Tagged Image File Format
VDA.vda
Truevision Targa image
VIPS.vips
VIPS image
WBMP.wbmp
Wireless Bitmap (level 0) image
WEBP.webp
WebP Image Format
YUV.yuv
CCIR 601 4:1:1 or 4:2:2
Frequently asked questions
How does this work?
This converter runs entirely in your browser. When you select a file, it is read into memory and converted to the selected format. You can then download the converted file.
How long does it take to convert a file?
Conversions start instantly, and most files are converted in under a second. Larger files may take longer.
What happens to my files?
Your files are never uploaded to our servers. They are converted in your browser, and the converted file is then downloaded. We never see your files.
What file types can I convert?
We support converting between all image formats, including JPEG, PNG, GIF, WebP, SVG, BMP, TIFF, and more.
How much does this cost?
This converter is completely free, and will always be free. Because it runs in your browser, we don't have to pay for servers, so we don't need to charge you.
Can I convert multiple files at once?
Yes! You can convert as many files as you want at once. Just select multiple files when you add them.