Image Augmentation and Manipulation


Motivation


Today's world is hugely driven by data. In certain scenarios, however, data is extremely scarce. In this line of research we focus on image augmentation and manipulation in order to increase the amount and/or improve the quality of the available data.

For more information please contact Stephan Brehm.


Applications

Image Augmentation

Right: learned semantic transformation of the object shown in the left image

Today's world is hugely driven by data. In certain scenarios, however, data is extremely scarce. A common solution in these scenarios is data augmentation, which, in general, means transforming existing data to increase the total amount of data available. Classic data augmentation approaches for images include cropping, resizing, rotation, illumination changes, and so on. In this line of research we focus on learning image augmentation techniques, which allows for semantically meaningful changes to an image. For example, we could change the color of an object of interest, such as the car shown above.
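For illustration, here is a minimal sketch of such classic augmentations using torchvision; the specific parameter values and the file name are arbitrary choices for the example, not settings used in our experiments.

```python
from PIL import Image
from torchvision import transforms

# Classic image augmentation: random crop, resize, rotation and
# illumination (brightness/contrast) changes. Parameter values are
# illustrative only.
augment = transforms.Compose([
    transforms.RandomResizedCrop(224),          # crop + resize
    transforms.RandomRotation(degrees=15),      # small random rotation
    transforms.ColorJitter(brightness=0.3,      # illumination changes
                           contrast=0.3),
    transforms.RandomHorizontalFlip(),
])

image = Image.open("car.jpg")                   # any training image (hypothetical file)
augmented = augment(image)                      # a new, transformed training example
```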


Due to recent advances in deep generative models, we are now able to perform semantic data augmentation and produce new, photorealistic examples.
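As a rough illustration of the idea, the sketch below draws new samples from a small GAN-style generator; the architecture and (untrained) weights are assumptions for illustration only, not the generative model used in this work.

```python
import torch
import torch.nn as nn

# Toy DCGAN-style generator: maps a latent vector to a 32x32 RGB image.
# Architecture is illustrative; in practice a trained generative model
# would be loaded and used to synthesise additional training examples.
class Generator(nn.Module):
    def __init__(self, latent_dim: int = 100):
        super().__init__()
        self.net = nn.Sequential(
            nn.ConvTranspose2d(latent_dim, 256, 4, 1, 0), nn.BatchNorm2d(256), nn.ReLU(True),
            nn.ConvTranspose2d(256, 128, 4, 2, 1), nn.BatchNorm2d(128), nn.ReLU(True),
            nn.ConvTranspose2d(128, 64, 4, 2, 1), nn.BatchNorm2d(64), nn.ReLU(True),
            nn.ConvTranspose2d(64, 3, 4, 2, 1), nn.Tanh(),
        )

    def forward(self, z: torch.Tensor) -> torch.Tensor:
        return self.net(z)

generator = Generator()                    # in practice: load trained weights here
z = torch.randn(8, 100, 1, 1)              # batch of latent codes
synthetic_batch = generator(z)             # eight synthetic images, shape (8, 3, 32, 32)
```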


Image/Video Deblurring

Results of our image deblurring method. Left: blurry input image. Middle (ours): deblurred image. Right: ground-truth sharp image.

In High-Resolution Dual-Stage Multi-Level Feature Aggregation for Single Image and Video Deblurring, we address the problem of dynamic scene motion deblurring. We present a model that combines high-resolution processing with a multi-resolution feature aggregation method for single-frame and video deblurring. Our proposed model consists of two stages. In the first stage, single-image deblurring is performed at a very high resolution. For this purpose, we propose a novel network building block that employs multiple atrous convolutions in parallel. We carefully tune the atrous rate of each of these convolutions to achieve complete coverage of a rectangular area of the input. In this way we obtain a large receptive field at a high spatial resolution. The second stage aggregates information across multiple consecutive frames of a video sequence. Here we maintain a high resolution, but also use multi-resolution features to mitigate the effects of large movements of objects between frames. The presented models rank first and fourth in the NTIRE 2020 challenges for single image deblurring and video deblurring, respectively. We apply our framework to current benchmarks and challenges and show that our model provides state-of-the-art results.
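To illustrate the general idea of running several atrous convolutions in parallel, here is a minimal PyTorch sketch; the channel counts and dilation rates are illustrative assumptions, not the carefully tuned configuration described in the paper.

```python
import torch
import torch.nn as nn

class ParallelAtrousBlock(nn.Module):
    """Runs several atrous (dilated) convolutions in parallel and fuses them.

    The dilation rates below are illustrative; in the paper the rates are
    tuned so that the branches jointly cover a rectangular input region,
    yielding a large receptive field at full spatial resolution.
    """

    def __init__(self, channels: int, rates=(1, 2, 3, 4)):
        super().__init__()
        self.branches = nn.ModuleList([
            nn.Conv2d(channels, channels, kernel_size=3, padding=r, dilation=r)
            for r in rates
        ])
        # 1x1 convolution fuses the concatenated branch outputs
        self.fuse = nn.Conv2d(channels * len(rates), channels, kernel_size=1)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        feats = [self.act(branch(x)) for branch in self.branches]
        out = self.fuse(torch.cat(feats, dim=1))
        return self.act(out + x)            # residual connection keeps resolution


if __name__ == "__main__":
    block = ParallelAtrousBlock(channels=32)
    dummy = torch.randn(1, 32, 256, 256)    # high-resolution feature map
    print(block(dummy).shape)               # torch.Size([1, 32, 256, 256])
```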


Learning Segmentation from Object Color

Changing the object color allows identifying the object at the pixel level

In Learning Segmentation from Object Color, we propose Color Shift GAN (CSGAN), a method that learns to segment an object class without the need for pixel-wise annotations. We exploit a single textual annotation of the basic object color per image to learn the semantics of an object class. By using only a textual basic-color annotation of each object, we are able to drastically reduce labeling effort. We created a dataset of 29,910 images of cars and annotated the basic color of their bodywork. Our model reaches 61.4% IoU on our test data. CSGAN trained with 128 additional pixel-wise annotations reaches 62.0%. By adding 45,150 unlabeled images to the training of CSGAN, we are able to increase IoU to 65.0% without using a single pixel-wise annotation. This verifies that our weak objective is sufficient for learning segmentation.
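The sketch below illustrates only the general mask-gated color-shift intuition behind the caption above: a color change applied through a predicted soft mask only looks plausible if the mask covers exactly the object. The network layout, the form of the color shift, and all sizes are assumptions for illustration; the actual CSGAN generator, discriminator, and losses are described in the paper.

```python
import torch
import torch.nn as nn

class MaskGatedColorShift(nn.Module):
    """Illustrative sketch of a mask-gated colour shift (not the CSGAN model).

    A small network predicts a soft object mask; an additive colour shift is
    applied only inside that mask. If the recoloured image still looks
    realistic (judged by a discriminator, not shown), the mask must align
    with the object, which yields a pixel-level segmentation for free.
    """

    def __init__(self):
        super().__init__()
        self.mask_net = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(16, 1, 3, padding=1), nn.Sigmoid(),   # soft mask in [0, 1]
        )

    def forward(self, image: torch.Tensor, color_shift: torch.Tensor):
        # image: (B, 3, H, W) in [0, 1]; color_shift: (B, 3) additive RGB shift
        mask = self.mask_net(image)                          # (B, 1, H, W)
        shifted = image + color_shift[:, :, None, None]      # recolour everywhere
        # blend: recoloured pixels inside the mask, original pixels outside
        out = mask * shifted + (1.0 - mask) * image
        return out.clamp(0.0, 1.0), mask
```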

