Image-based Material Editing

[Concepts]

[Depth Estimation]

[Gloss, Specular, and Highlight Detection]

[Re-texturing, Transparency, and Translucency]

[Environment Map and BRDF Replacement]

[Results]

[Acknowledgements]

[References]

From left to right: translucency, original, BRDF replacement, transparency, and re-texturing of a Delft vase.

 


We implement the impressive method of Khan et al. [Khan et al. 2006] for modifying the material of an object in a single high-dynamic-range photograph.

Concepts

Khan et al. observe that the human visual system is more sensitive to local coherence than to global distortion [Khan et al. 2006]. This principle underlies the whole system, from depth estimation to BRDF replacement.

In this project, we aim for an interactive material editing system. Because high-quality tone mapping often introduces large latency, the global tone mapping operator of [Reinhard et al. 2002] is applied during the editing phase. Once the user is satisfied with the result, clicking Save Result generates the final image with our full tone mapper, a hybrid of the photographic operator [Reinhard et al. 2002] and bilateral filtering [Durand and Dorsey 2002] developed in our Project 1.

Depth Estimation

Since a perfect depth map of the object is not required for this application, we take the inverse of the intensity as the depth. A bilateral filter removes fine details (texture), and sigmoidal compression is applied before filtering.
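The steps above can be sketched in Python as follows. This is a minimal illustration, not our actual implementation: the sigmoid form L/(L+1) is an assumption, "inverse" is interpreted as the complement 1 − I, and the bilateral filter is a brute-force version without the lookup-table speedup.

```python
import numpy as np

def sigmoidal_compress(lum):
    """Compress HDR luminance into [0, 1) with a sigmoid (assumed form L/(L+1))."""
    return lum / (lum + 1.0)

def bilateral_filter(img, radius=2, sigma_s=2.0, sigma_r=0.1):
    """Brute-force bilateral filter: smooths while preserving edges,
    so fine texture is removed before depth estimation."""
    h, w = img.shape
    padded = np.pad(img, radius, mode="edge")
    acc = np.zeros_like(img)
    norm = np.zeros_like(img)
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            shifted = padded[radius + dy:radius + dy + h,
                             radius + dx:radius + dx + w]
            spatial = np.exp(-(dx * dx + dy * dy) / (2 * sigma_s ** 2))
            range_w = np.exp(-((shifted - img) ** 2) / (2 * sigma_r ** 2))
            weight = spatial * range_w
            acc += weight * shifted
            norm += weight
    return acc / norm

def estimate_depth(lum):
    """Depth proxy: bright pixels are assumed closer, so depth = 1 - filtered intensity."""
    compressed = sigmoidal_compress(lum)
    smooth = bilateral_filter(compressed)
    return 1.0 - smooth
```

The output is a smooth map in [0, 1] where darker input pixels are assigned larger depth.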

To speed up this process, the filter coefficients are pre-computed and stored in a lookup table. The user can also choose to remove the highlights before depth estimation. Below is an example.

Besides depth, we also need the gradient field and the normals of the object for shading and re-texturing. The gradients (dx, dy) are approximated with forward finite differences, and the normal is the cross product of [1, 0, dx] and [0, 1, dy]. The spline function described in the paper reshapes the gradient field; the user can choose the degree of reshaping.
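The gradient and normal computation above amounts to a few lines of NumPy (the spline reshaping step is omitted in this sketch):

```python
import numpy as np

def normals_from_depth(depth):
    """Forward-difference gradients, then per-pixel normals as the cross
    product of the tangent vectors [1, 0, dx] and [0, 1, dy]."""
    dx = np.zeros_like(depth)
    dy = np.zeros_like(depth)
    dx[:, :-1] = depth[:, 1:] - depth[:, :-1]   # forward difference in x
    dy[:-1, :] = depth[1:, :] - depth[:-1, :]   # forward difference in y
    # cross([1, 0, dx], [0, 1, dy]) = (-dx, -dy, 1)
    n = np.stack([-dx, -dy, np.ones_like(depth)], axis=-1)
    n /= np.linalg.norm(n, axis=-1, keepdims=True)
    return n
```

On a flat depth map this yields the constant normal (0, 0, 1), as expected.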

Gloss, Specular, and Highlight Detection

For the gloss and specular effects, the proposed idea is to adjust a pixel's intensity polynomially when it exceeds a threshold. The paper exposes three parameters so the user can tune the effect. However, the default values the paper suggests have the wrong scale: we find the appropriate range of beta for the gloss effect is 0.0~1.0 rather than the 20 suggested in the paper.
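One plausible form of such a polynomial boost is sketched below. The parameter names (threshold, beta, gain) are illustrative stand-ins for the paper's three parameters, not its exact formulation; with beta in (0, 1) the curve lifts intensities above the threshold, consistent with the scale we report above.

```python
import numpy as np

def gloss_boost(lum, threshold=0.7, beta=0.5, gain=1.0):
    """Polynomially boost intensities above `threshold`; pixels below it
    are left untouched. Parameter names are illustrative."""
    out = lum.copy()
    mask = lum > threshold
    # remap the excess above the threshold with a power curve
    excess = (lum[mask] - threshold) / (1.0 - threshold + 1e-8)
    out[mask] = threshold + gain * (excess ** beta) * (1.0 - threshold)
    return out
```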

Highlights are detected with a histogram: the pixel value at which the histogram's derivative is lowest approximates the start of the highlight range. We doubt this heuristic works in most cases; a user-defined highlight start gives better results.
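The heuristic can be sketched as follows. Restricting the search to the upper half of the histogram is our own assumption, added so the estimate does not latch onto structure in the shadows:

```python
import numpy as np

def highlight_start(lum, bins=64):
    """Estimate where highlights begin: build an intensity histogram and
    pick the bin with the lowest (most negative) first derivative."""
    hist, edges = np.histogram(lum, bins=bins)
    deriv = np.diff(hist.astype(np.float64))
    # search only the upper half so we do not latch onto shadow structure
    half = bins // 2
    idx = half + int(np.argmin(deriv[half:]))
    return edges[idx + 1]
```

On a distribution whose bulk ends sharply around some value with only a sparse highlight tail above it, the estimate lands near that drop-off.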

Re-texturing, Transparency, and Translucency

Re-texturing is done by warping the texture according to the gradient of the depth. The warped texture T is then blended with the object B using the standard matting equation C = αT + (1 − α)B.
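The blend itself is the standard compositing step; here is a minimal sketch (the depth-gradient warp that produces T is omitted):

```python
import numpy as np

def composite(texture, background, alpha):
    """Standard matting equation: C = alpha * T + (1 - alpha) * B.
    `alpha` may be a per-pixel 2D map for RGB inputs."""
    if alpha.ndim == texture.ndim - 1:
        alpha = alpha[..., None]    # broadcast the matte over color channels
    return alpha * texture + (1.0 - alpha) * background
```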

For transparency and translucency, T in the matting equation is the inpainted version of the source image. Below are the inpainted images and their corresponding transparency and translucency effects.

 

However, the matting equation alone does not give good results with the proposed method, so we improve the strategy by additionally multiplying by the intensity.

The following figure shows the result before and after our improvement.

Environment Map and BRDF Replacement

A single photograph contains at most half of the environment. As in the paper, we inpaint the background image and extrude it onto a hemisphere; the other hemisphere is simply a duplicate.
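A minimal sketch of the duplication idea follows. The orthographic-style hemisphere-to-image mapping here is our own simplification for illustration, not the paper's exact parameterization:

```python
import numpy as np

def env_lookup(direction, background):
    """Map a 3D unit direction to a background pixel: the inpainted
    background covers the front hemisphere (z >= 0), and the back
    hemisphere duplicates it by mirroring z."""
    h, w = background.shape[:2]
    x, y, z = direction
    z = abs(z)                      # duplicate the front hemisphere for the back
    # orthographic-style mapping of the hemisphere onto the image plane
    u = int((x * 0.5 + 0.5) * (w - 1))
    v = int((y * 0.5 + 0.5) * (h - 1))
    return background[v, u]
```

By construction, a direction and its mirror through the image plane fetch the same pixel.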

Since we have no access to a measured BRDF dataset, we use Ward's model [Ward 1992]. The user can adjust its parameters and color; the specular gains are assumed to be identical across the three color channels. To speed up rendering, we pre-sample only 200 points in the environment map during the inpainting process, so clicking "Inpaint" may produce a different image each time. No post-filter is applied in this implementation, so some defects are visible.
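For reference, a sketch of the isotropic Ward lobe we evaluate (a diffuse term plus a Gaussian-like specular lobe around the half vector; the parameter values are illustrative, and rho_s is shared across channels as stated above):

```python
import numpy as np

def ward_isotropic(n, wi, wo, rho_d=0.5, rho_s=0.2, alpha=0.15):
    """Isotropic Ward BRDF. n, wi, wo: surface normal, incoming and
    outgoing directions; rho_d/rho_s: diffuse/specular gains; alpha: roughness."""
    n, wi, wo = (v / np.linalg.norm(v) for v in (n, wi, wo))
    cos_i, cos_o = float(n @ wi), float(n @ wo)
    if cos_i <= 0 or cos_o <= 0:
        return 0.0                  # below the horizon: no reflection
    h = wi + wo
    h /= np.linalg.norm(h)          # half vector
    cos_h = float(n @ h)
    tan2 = (1.0 - cos_h ** 2) / (cos_h ** 2)
    spec = rho_s * np.exp(-tan2 / alpha ** 2) / (
        4.0 * np.pi * alpha ** 2 * np.sqrt(cos_i * cos_o))
    return rho_d / np.pi + spec
```

The lobe peaks in the mirror configuration and decays quickly away from it, leaving essentially the Lambertian floor rho_d/pi.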

We find the main limitation of this method is that the photograph must include some light sources; otherwise the result looks fake. We observe many unnatural results when the sky is excluded from an outdoor photograph.

Results

The program and source code can be downloaded [here]. The inputs are an HDR image (.hdr) and a matte (.bmp); the user has to draw the matte manually. Please contact me if you find any bugs or need any help.

Here are some inputs and results.

Bottle

[hdr][mat]

Vase

[hdr][mat]

Horse

[hdr][mat]

 

BRDF
BRDF
BRDF
BRDF
Retexture
Retexture
Translucency
Transparency
Transparency
Transparency
Dark Glass
Retexture
Retexture
BRDF
BRDF
Gloss
Dark Glass
BRDF
Transparency
Translucency
BRDF
Transparency

 

Acknowledgements

We thank Erum Arif Khan for providing test data and for patiently clarifying our naive questions.

References

  1. Erum Arif Khan, Erik Reinhard, Roland Fleming, and Heinrich Buelthoff, Image-based Material Editing, ACM Transactions on Graphics, in Proceedings of Siggraph, Boston, USA, August 2006.
  2. Gregory J. Ward, Measuring and Modeling Anisotropic Reflection, in Proceedings of SIGGRAPH '92, ACM Press, New York, NY, 265-272.
  3. Paul E. Debevec and Jitendra Malik. Recovering High Dynamic Range Radiance Maps from Photographs, in SIGGRAPH 1997.
  4. Fredo Durand and Julie Dorsey, Fast Bilateral Filtering for the Display of High Dynamic Range Images, in SIGGRAPH 2002.
  5. Erik Reinhard, Michael Stark, Peter Shirley, and Jim Ferwerda, Photographic Tone Reproduction for Digital Images, in SIGGRAPH 2002.

© 2006 Chia-Kai Liang and Chihyuan Chung, NTUEE