Title | On-Line Statistical Analysis Based Fast Mode Decision for Multi-View Video Coding |
Author | Chia-Chi Chan (Dept. Communication Engineering, National Central University, Taiwan), Jheng-Ping Lin (ZyXEL Corp., Taiwan), Chih-Wei Tang (Dept. Communication Engineering, National Central University, Taiwan) |
Page | pp. 478 - 481 |
Keyword | Multi-view video coding, fast mode decision, statistical analysis, RD cost, motion and disparity estimation |
Abstract | The high computational complexity of multi-view video codecs makes speed-ups necessary for their realization in consumer electronics. Since fast encoding algorithms are expected to adapt to different video sequences, this paper proposes a fast algorithm that consists of fast mode decision and fast disparity estimation for multi-view video coding. The fast mode decision algorithm applies to both temporal and inter-view predictions. The candidates for mode decision are reduced based on a set of thresholds. Unlike previous fast mode decision algorithms for MVC, this scheme determines the thresholds according to an on-line statistical analysis of the motion and disparity costs of the first GOP in each view. Since inter-view prediction is time consuming, we also propose a fast disparity estimation algorithm to save encoding time. Experimental results show that the proposed scheme reduces the computational complexity significantly with negligible degradation of coding efficiency. |
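The on-line thresholding idea above can be sketched generically: gather mode-decision costs while encoding the first GOP, derive a threshold from their statistics, and use it to prune candidates afterwards. The mean-plus-k-sigma rule, function names, and mode set below are illustrative assumptions, not the paper's exact scheme.

```python
from statistics import mean, stdev

def derive_threshold(first_gop_costs, k=1.0):
    """Derive an early-termination threshold from the RD costs observed
    in the first GOP (here: mean + k * sample standard deviation)."""
    return mean(first_gop_costs) + k * stdev(first_gop_costs)

def candidate_modes(rd_cost_skip, threshold, all_modes):
    """If the SKIP-mode RD cost is below the on-line threshold, test
    only SKIP; otherwise fall back to the full candidate set."""
    return ["SKIP"] if rd_cost_skip < threshold else all_modes

# Toy usage: costs gathered while encoding the first GOP of one view.
gop_costs = [120.0, 95.0, 130.0, 110.0, 105.0]
th = derive_threshold(gop_costs)
print(candidate_modes(80.0, th, ["SKIP", "16x16", "8x8", "Intra"]))  # → ['SKIP']
```

Because the threshold is recomputed per view from its own first GOP, the pruning adapts to sequence content rather than relying on fixed offline constants.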
Title | Optimal Rate Allocation for View Synthesis along a Continuous Viewpoint Location in Multiview Imaging |
Author | Vladan Velisavljevic (Deutsche Telekom Laboratories, Germany), Gene Cheung (National Institute of Informatics, Japan), Jacob Chakareski (Ecole Polytechnique Federale de Lausanne, Switzerland) |
Page | pp. 482 - 485 |
Keyword | Multi-view imaging, Rate allocation |
Abstract | We consider the scenario of view synthesis via depth-image based rendering in multi-view imaging. We formulate a resource allocation problem of jointly assigning an optimal number of bits to compressed texture and depth images such that the maximum distortion of a synthesized view over a continuum of viewpoints between two encoded reference views is minimized, for a given bit budget. We construct simple yet accurate image models that characterize the pixel values at similar depths as first-order Gaussian auto-regressive processes. Based on our models, we derive an optimization procedure that numerically solves the formulated min-max problem using Lagrange relaxation. Through simulations we show that, for the two-captured-view scenario, our optimization provides a significant gain (up to 2dB) in quality of the synthesized views for the same overall bit rate over a heuristic quantization that selects only two quantizers - one for the encoded texture images and the other for the depth images. |
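The min-max criterion above can be illustrated with a toy exhaustive search standing in for the Lagrangian procedure: choose a (texture, depth) quantizer pair within the bit budget whose worst synthesized-view distortion over sampled viewpoints is smallest. The rate/distortion numbers and the additive distortion model are invented for illustration.

```python
from itertools import product

def minmax_allocation(tex_options, dep_options, budget):
    """Pick the (texture, depth) quantizer pair whose total rate fits
    the budget and whose worst-case synthesized-view distortion, over
    the sampled intermediate viewpoints, is smallest (min-max).
    Each option is (rate, [distortion at each sampled viewpoint])."""
    best, best_cost = None, float("inf")
    for tex, dep in product(tex_options, dep_options):
        if tex[0] + dep[0] > budget:
            continue
        # Illustrative model: synthesized distortion at a viewpoint is
        # the sum of the texture- and depth-induced terms.
        worst = max(t + d for t, d in zip(tex[1], dep[1]))
        if worst < best_cost:
            best, best_cost = (tex, dep), worst
    return best, best_cost

# Two texture and two depth quantizers, distortions at two viewpoints.
tex = [(10, [5, 6]), (20, [2, 3])]
dep = [(5, [4, 4]), (15, [1, 1])]
print(minmax_allocation(tex, dep, budget=35))  # best worst-case distortion is 4
```

With a tighter budget the coarse quantizers win instead, which is exactly the trade-off the bit-allocation procedure navigates.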
Title | Panoramic Scene Generation from Multi-view Images with Close Foreground Objects |
Author | Soon-Young Lee (Seoul National University, Republic of Korea), Jae-Young Sim (Ulsan National Institute of Science and Technology, Republic of Korea), Chang-Su Kim (Korea University, Republic of Korea), Sang-Uk Lee (Seoul National University, Republic of Korea) |
Page | pp. 486 - 489 |
Keyword | 3D scene representation, Image panorama, view expansion, multi-view image processing |
Abstract | An algorithm to generate a panorama from multi-view images, which
contain foreground objects with varying depths, is proposed in this
work. The proposed algorithm constructs a foreground panorama and a
background panorama separately, and then merges them into a complete
panorama. First, the foreground panorama is obtained by finding the
translational displacements of objects between source images. Second,
the background panorama is initialized using warped source images and
then optimized to preserve spatial consistency and satisfy visual
constraints. Then, the background panorama is extended by inserting
seams and merged with the foreground panorama. Experimental results
demonstrate that the proposed algorithm provides visually satisfying
panoramas with all meaningful foreground objects, but without severe
artifacts in the backgrounds. |
Title | A Sub-Pixel Virtual View Synthesis Method for Multiple View Synthesis |
Author | Xin Tong, Ping Yang (Tsinghua University, China), Xiaozhen Zheng, Jianhua Zheng (Hisilicon Technologies Co. Ltd., China), Yun He (Tsinghua University, China) |
Page | pp. 490 - 493 |
Keyword | SPVVS, virtual view synthesis |
Abstract | A sub-pixel virtual view synthesis (SPVVS) method is proposed in this paper. By increasing the sampling rate of the target virtual view, the proposed method can exploit the sub-pixel information available at corresponding pixel positions among multiple input views with sub-pixel displacements. A direction-adaptive image interpolation is applied to generate a high-resolution intermediate image, which is then down-sampled to obtain the target synthesized virtual view. The realization procedure of SPVVS is also presented. Experimental results show significant improvements in subjective quality compared to the traditional integer-pixel synthesis method. Artifacts such as ‘hat’ effects can be significantly reduced. |
Title | Improving the Visual Quality of AVC/H.264 by Combining It with Content Adaptive Depth Map Compression |
Author | Christian Keimel, Klaus Diepold (TU Muenchen, Germany), Michel Sarkis (Sony Deutschland GmbH, Germany) |
Page | pp. 494 - 497 |
Keyword | AVC/H.264, 3D scene analysis, 3DTV, Depth map compression, content adaptive meshing |
Abstract | The future of video coding for 3DTV lies in the combination of depth maps and corresponding textures. Most current video coding standards, however, are only optimized for visual quality and are not able to efficiently compress depth maps. We present in this work a content adaptive depth map meshing with tritree and entropy encoding for 3D videos. We show that this approach outperforms the intra frame prediction of AVC/H.264 for the coding of depth maps of still images. We also demonstrate by combining AVC/H.264 with our algorithm that we are able to increase the visual quality of the encoded texture on average by 6 dB. This work is currently limited to still images but an extension to intra coding of 3D video is straightforward. |
Title | Error Concealment for MVC and 3D Video Coding |
Author | Olgierd Stankiewicz, Krzysztof Wegner, Marek Domański (Poznan University of Technology, Poland) |
Page | pp. 498 - 501 |
Keyword | Error concealment, MVC, 3D video, depth maps, cross-checking |
Abstract | In this paper we propose a novel approach to error concealment that can be applied to MVC and other 3D video coding technologies. The image content lost due to errors is recovered using multiple error-concealment techniques. In our work we use three: well-known temporal- and intra-based techniques and a novel inter-view technique. The proposed inter-view recovery employs Depth Image Based Rendering (DIBR), which requires neighboring views and corresponding depth maps. These depth maps can be delivered in the bit-stream or estimated at the receiver. In order to obtain the final reconstruction, the best technique is selected locally. For that, an original recovery quality measurement method, based on cross-checking, is proposed. The idea has been implemented and assessed experimentally using 3D video test sequences. The objective and subjective results show that the proposed approach provides good quality of the reconstructed video. |
Title | Fast Stereo Matching with Predictive Search Range |
Author | Yu-Cheng Tseng, Po-Hsiung Hsu, Tian-Sheuan Chang (Dept. of Electronics Engineering & Institute of Electronics, National Chiao Tung University, Taiwan) |
Page | pp. 506 - 509 |
Keyword | disparity estimation, stereo matching |
Abstract | Local stereo matching can deliver accurate disparity maps with methods such as adaptive support-weight aggregation, but suffers from high computational complexity, O(NL), where N is the pixel count in the spatial domain and L is the search range in the disparity domain. This paper proposes a fast algorithm that groups similar pixels into super-pixels to reduce the spatial cost, and predicts their search range by simple matching to reduce the disparity cost. The proposed algorithm can be directly applied to other local stereo matching methods, reducing their computational complexity to only 8.2%-17.4% of the original with a slight accuracy degradation of 1.5%-3.2%. |
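The search-range prediction idea can be shown in one dimension: a cheap coarse estimate narrows the per-pixel disparity search to a small band, while still finding the same match. The scanlines, window size, and coarse estimate below are invented, and the super-pixel grouping is omitted.

```python
def sad_disparity(left, right, x, d_range, win=1):
    """Pick the disparity in d_range minimizing SAD over a small window."""
    def sad(d):
        return sum(abs(left[x + i] - right[x + i - d])
                   for i in range(-win, win + 1))
    return min(d_range, key=sad)

def predicted_range(coarse_d, margin=1):
    """Narrow the per-pixel search to a band around a cheap coarse
    estimate, shrinking it from the full range L to 2*margin + 1."""
    return range(max(0, coarse_d - margin), coarse_d + margin + 1)

# Toy scanlines: right is left shifted by a true disparity of 2.
left  = [0, 0, 10, 50, 90, 50, 10, 0, 0, 0]
right = [10, 50, 90, 50, 10, 0, 0, 0, 0, 0]
full = range(0, 4)              # full disparity search range
pred = predicted_range(2)       # predicted narrow range
assert sad_disparity(left, right, 4, pred) == sad_disparity(left, right, 4, full)
print(sad_disparity(left, right, 4, pred))  # → 2
```

The cost of the narrow search no longer grows with L, which is where the abstract's complexity reduction comes from.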
Title | Influences of Frame Delay and Packet Loss between Left and Right Frames in Stereoscopic Video Communications |
Author | Shuliang Lin, Yuichiro Sawa, Norishige Fukushima, Yutaka Ishibashi (Nagoya Institute of Technology, Japan) |
Page | pp. 510 - 513 |
Keyword | stereo video, network delay, packet loss, subjective assessment, inter-media synchronization |
Abstract | This paper analyzes the influences of frame delay and packet loss on stereoscopic vision when stereoscopic video is transferred over an IP network.
We employ live-action videos, transferred to a head-mounted display (HMD), and conduct a subjective assessment of stereoscopic perception.
As a result, we found that the speed and movement direction of the object of attention strongly affect the perceived deterioration when frame delay or packet loss occurs. |
Title | A Novel Inloop Filter for Video-Compression Based on Temporal Pixel Trajectories |
Author | Marko Esche, Andreas Krutz, Alexander Glantz, Thomas Sikora (Technische Universität Berlin, Germany) |
Page | pp. 514 - 517 |
Keyword | video-compression, temporal inloop filter, pixel trajectories, deblocking |
Abstract | The objective of this work is to investigate the performance
of a new inloop filter for video compression, which uses temporal
rather than spatial information to improve the quality of
reference frames used for prediction. The new filter has been
integrated into the H.264/AVC baseline encoder and tested on
a wide range of sequences. Experimental results show that
the filter achieves a bit rate reduction of up to 12% and more
than 4% on average without increasing the complexity of either
encoder or decoder significantly. |
Title | Fast Rate-Distortion Optimized Transform for Intra Coding |
Author | Xin Zhao (Institute of Computing Technology, Chinese Academy of Sciences; Graduate University of Chinese Academy of Sciences, China), Li Zhang, Siwei Ma, Wen Gao (Institute of Digital Media, Peking University, China) |
Page | pp. 518 - 521 |
Keyword | rate-distortion optimization (RDO), Intra coding, mode-dependent directional transform (MDDT), rate-distortion optimized transform (RDOT) |
Abstract | In our previous work, the rate-distortion optimized transform (RDOT) was introduced. RDOT achieves a remarkable coding gain for KTA Intra coding, but drastically increases the computational complexity at the encoder. To address this problem, we propose a fast RDOT scheme using macroblock- and block-level R-D cost thresholding. With the proposed methods, unnecessary mode trials can be efficiently skipped during encoding. Experimental results show that, with negligible performance degradation, about 88.9% of the total encoding time is saved. |
Title | A Hierarchical Variable-Sized Block Transform Coding Scheme for Coding Efficiency Improvement on H.264/AVC |
Author | Bumshik Lee, Jaeil Kim, Sangsoo Ahn, Munchurl Kim (Korea Advanced Institute of Science and Technology, Republic of Korea), Hui Yong Kim, Jongho Kim, Jin Soo Choi (Electronic Telecommunications Research Institute (ETRI), Republic of Korea) |
Page | pp. 522 - 525 |
Keyword | Quadtree Transform, Variable Block-size Transform, Discrete Cosine Transform, H.264/AVC |
Abstract | In this paper, a rate-distortion optimized variable block transform coding scheme is proposed based on a quadtree-structured transform for macroblock (MB) coding, using the order-4 and order-8 integer cosine transform (ICT) kernels of H.264/AVC as well as a new order-16 ICT kernel. The order-4, -8, and -16 ICT kernels are applied to inter-predictive coding in square (4x4, 8x8, or 16x16) or non-square (16x8 or 8x16) transforms for each MB in a quadtree-structured manner. The proposed quadtree-structured variable block transform scheme using the order-16 ICT kernel achieves a significant bitrate reduction of up to 15% compared to the High profile of H.264/AVC. Even though the number of candidate transform types increases, the encoding time can be reduced by 4-6% on average compared with H.264/AVC. |
Title | Enhanced Region-Based Adaptive Interpolation Filter |
Author | Shohei Matsuo, Yukihiro Bandoh, Seishi Takamura, Hirohisa Jozawa (NTT Cyber Space Laboratories, NTT Corporation, Japan) |
Page | pp. 526 - 529 |
Keyword | motion compensation, adaptive interpolation filter, region-division, edge, image locality |
Abstract | The adaptive interpolation filter (AIF) was proposed to improve motion compensation. The conventional AIF optimizes the filter coefficients on a frame-by-frame basis. However, when the image is divided into multiple regions with different characteristics, the coding efficiency can be improved by performing the optimization on a region-by-region basis. In this paper, we propose a region-based AIF (RBAIF). Simulation results show that RBAIF offers about 0.43% and 5.05% higher coding gain than the conventional AIF and the H.264/AVC filter, respectively. |
Title | Fractional-Sample Motion Compensation Using Generalized Interpolation |
Author | Haricharan Lakshman, Benjamin Bross, Heiko Schwarz, Thomas Wiegand (Fraunhofer Institute for Telecommunications, Heinrich Hertz Institute, Germany) |
Page | pp. 530 - 533 |
Keyword | video coding, motion-compensated prediction, reference picture upsampling, B-splines |
Abstract | Typical interpolation methods in video coding perform filtering of reference picture samples using FIR filters for motion compensated
prediction. This process can be viewed as a signal decomposition using basis functions which are restricted by the interpolating constraint. Using the concept of generalized interpolation provides a greater degree of freedom for selecting basis functions. We implemented generalized interpolation using a combination of IIR and FIR filters. The complexity of the proposed scheme is comparable to that of an 8-tap FIR filter. Bit rate savings up to 20% compared to the H.264/AVC 6-tap filter are shown. |
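As a concrete instance of generalized interpolation, cubic B-spline interpolation uses a non-interpolating basis: an IIR-style prefilter recovers coefficients such that the FIR evaluation with the B-spline kernel reproduces the samples. The 1-D sketch below is illustrative, not the paper's implementation; the prefilter is written as a direct tridiagonal solve (Thomas algorithm) with zero coefficients assumed outside the signal.

```python
def prefilter_cubic_bspline(x):
    """Prefilter step: solve B c = x, where B is tridiagonal with rows
    (1/6, 4/6, 1/6) -- the cubic B-spline values at integer offsets."""
    n = len(x)
    a, b, c3 = 1 / 6, 4 / 6, 1 / 6
    cp, dp = [0.0] * n, [0.0] * n
    cp[0], dp[0] = c3 / b, x[0] / b
    for i in range(1, n):              # forward sweep
        m = b - a * cp[i - 1]
        cp[i] = c3 / m
        dp[i] = (x[i] - a * dp[i - 1]) / m
    c = [0.0] * n
    c[-1] = dp[-1]
    for i in range(n - 2, -1, -1):     # back substitution
        c[i] = dp[i] - cp[i] * c[i + 1]
    return c

def beta3(u):
    """Cubic B-spline kernel (the FIR basis function)."""
    u = abs(u)
    if u < 1:
        return 2 / 3 - u * u + u ** 3 / 2
    if u < 2:
        return (2 - u) ** 3 / 6
    return 0.0

def interpolate(c, t):
    """FIR step: sum of coefficients weighted by the shifted kernel."""
    k0 = int(t) - 1
    return sum(c[k] * beta3(t - k)
               for k in range(max(0, k0), min(len(c), k0 + 4)))

samples = [0.0, 1.0, 4.0, 9.0, 4.0, 1.0, 0.0]
coeffs = prefilter_cubic_bspline(samples)
# The basis is non-interpolating, yet the prefiltered coefficients
# reproduce the original samples exactly at integer positions.
print(round(interpolate(coeffs, 3.0), 6))  # → 9.0
```

In practice the tridiagonal solve is implemented as a causal/anti-causal recursive (IIR) filter pair, which is what keeps the complexity comparable to a short FIR filter.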
Title | Image Coding Approach Based On Image Decomposition |
Author | Yunhui Shi, Yanli Hou, Baocai Yin, Wenpeng Ding (Beijing University of Technology, China) |
Page | pp. 534 - 537 |
Keyword | image decomposition, texture synthesis, region selection, image coding |
Abstract | Textures in many images or video scenes are difficult to code because of the large amount of visible detail. This paper proposes an image coding approach to solve this problem, in which we incorporate image decomposition and texture synthesis technology into the image coding framework. The key idea of our approach is to first decompose the original image into a cartoon component u and a texture component v with different basic characteristics, and then to synthesize the selected texture regions in the texture component v. The cartoon component u and the non-synthetic regions in the texture component v are compressed by JPEG. Experimental results show bit-rate savings of over 30% compared with JPEG at similar visual quality levels. |
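The decompose-then-code-separately idea can be shown in one dimension. A moving average stands in for the actual cartoon/texture decomposition, and the JPEG and texture-synthesis steps are omitted; the point is only that u + v reconstructs the input exactly, so each component can be coded with a method suited to its character.

```python
def decompose(sig, radius=1):
    """Split a 1-D signal into a smooth 'cartoon' part u (a moving
    average stands in for the structural decomposition) and a residual
    'texture' part v, so that u + v reconstructs the input exactly."""
    n = len(sig)
    u = []
    for i in range(n):
        lo, hi = max(0, i - radius), min(n, i + radius + 1)
        u.append(sum(sig[lo:hi]) / (hi - lo))
    v = [x - y for x, y in zip(sig, u)]
    return u, v

sig = [3, 3, 9, 3, 3, 9, 3, 3]
u, v = decompose(sig)
# Exact reconstruction: coding u and v separately loses nothing here.
assert all(abs(a + b - s) < 1e-9 for a, b, s in zip(u, v, sig))
```

The smooth component u has a much smaller dynamic range than the input, while the oscillatory detail is isolated in v, where synthesis rather than transform coding can be applied.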
Title | A Real-Time System of Distributed Video Coding |
Author | Kazuhito Sakomizu, Takahiro Yamasaki, Satoshi Nakagawa, Takashi Nishi (Oki Electric Industry Co., Ltd., Japan) |
Page | pp. 538 - 541 |
Keyword | distributed video coding, real-time system, Slepian-Wolf theorem, Wyner-Ziv theorem |
Abstract | This paper presents a real-time distributed video coding (DVC) system. The decoding process of DVC is normally complex, which makes real-time implementation difficult. To address this problem, we propose a new DVC configuration with three methods, and implement the system with parallelization techniques. Experimental results show that the encoder, on an i.MX31 at 400 MHz, operates at about 13 fps for CIF, and the decoder, on a Core 2 Quad at 2.83 GHz, operates at more than 30 fps for CIF. |
Title | Block-Based Second Order Prediction on AVS-Part 2 |
Author | Binbin Yu, Shangwen Li, Lu Yu (Institute of Information and Communication Engineering, Zhejiang University, China) |
Page | pp. 542 - 545 |
Keyword | Block-based Second Order Prediction, Motion-compensated prediction, Mode prediction, Directional operators, AVS |
Abstract | AVS-Part 2 is a mainstream video coding standard with high compression efficiency similar to H.264/AVC. A technique named Second Order Prediction (SOP), built on H.264/AVC, has been presented to decrease the signal correlation after motion-compensated prediction. To achieve better coding performance, this paper presents a method named Block-based Second Order Prediction (BSOP), which adapts SOP to the features of motion compensation in AVS-P2, with detailed analysis and demonstration. Experimental results show that the proposed BSOP outperforms AVS-P2 P-picture coding by 3.99% bit-rate saving (0.126 dB BD-PSNR gain) on average, and performs better than SOP implemented on AVS by 1.81% bit-rate saving. |
Title | Improved Local PDF Estimation in the Wavelet Domain for Generalized Lifting |
Author | Julio C. Rolon (National Polytechnic Institute, Mexico), Philippe Salembier (Technical University of Catalonia, Spain) |
Page | pp. 546 - 549 |
Keyword | Generalized lifting, wavelets, pdf estimation, lossy image coding |
Abstract | Generalized Lifting (GL) has been studied for lossy image compression in [2,3]. It has been demonstrated that the method achieves a significant reduction of the energy and entropy of the wavelet coefficients. The definition of GL relies on an estimation of the pdf of the pixel to encode, conditioned on a surrounding context. The objective of this paper is to present an improved method for estimating this pdf at the local level. We follow the idea of self-similarity proposed in [1] for denoising, and propose to estimate the pdf using all the causal contexts within a window. Experimental results show a significant increase in the energy and entropy gains when compared to previous strategies [2,3]. |
Title | Image Coding by Using Non-Linear Texture Decomposition and Image Summarization |
Author | Chihiro Suzuki, Takamichi Miyata, Yoshinori Sakai (Tokyo Institute of Technology, Japan) |
Page | pp. 550 - 553 |
Keyword | Texture Synthesis, TV-regularization, Bidirectional Similarity, Image Coding |
Abstract | TV-regularization can be used to decompose a natural image into a structure image (S) and a texture image (T).
We propose a novel image coding method that codes these two images separately.
We first build a compaction image that contains all the texture patterns, and divide it into its own S and T.
The encoder then sends the compaction image's S and T, together with the input image's S, to the decoder.
At the decoder, the original-size T is synthesized from the compaction image by matching the compaction's S against the original-size S. |
Title | Coding Efficiency Improvement Using Inter-Picture Processing of Quantization Error |
Author | Kenji Sugiyama, Naoya Sagara, Masao Arizumi (Seikei University, Japan) |
Page | pp. 554 - 557 |
Keyword | Compatibility, I-picture coding, B-picture coding, Motion compensation, Quantization error |
Abstract | As standard video encoder techniques have matured, their rate of improvement has slowed. As an alternative, a new coding concept with semi-compatibility has been proposed, and an enhancement of I-picture efficiency has been discussed. The method reduces the quantization error using motion-compensated inter-picture processing. In this report, we apply this method to P-pictures to improve the efficiency of B-pictures. The quantization error component of the prediction signal is canceled by the averaging in bi-directional prediction. Experiments using MPEG-4 show significant improvement in coding efficiency with the proposed method. The maximum PSNR gain reaches 2.3 dB in a static sequence, and at least 0.5 dB is achieved in a high-motion sequence. |
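The cancellation claim rests on a standard fact: averaging two predictions whose quantization errors are independent and zero-mean roughly halves the error energy. A sketch with simulated uniform quantization noise (all signals and the noise model are invented for illustration):

```python
import random

random.seed(0)
truth = [float(i % 7) for i in range(10000)]

def noisy(sig, step=1.0):
    """Model quantization error as independent uniform noise in
    [-step/2, step/2] added to each sample."""
    return [s + random.uniform(-step / 2, step / 2) for s in sig]

def mse(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)

# Forward and backward predictions carry independent errors;
# bi-directional prediction averages them.
fwd, bwd = noisy(truth), noisy(truth)
avg = [(f + b) / 2 for f, b in zip(fwd, bwd)]
print(mse(fwd, truth) / mse(avg, truth))  # close to 2
```

The variance of the averaged error is half that of either prediction alone, which is why averaging the bi-directional prediction suppresses the quantization-error component.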
Title | H.264/AVC to Wavelet-Based Scalable Video Transcoding Supporting Multiple Coding Configurations |
Author | Eduardo Peixoto, Toni Zgaljic, Ebroul Izquierdo (Queen Mary, University of London, U.K.) |
Page | pp. 562 - 565 |
Keyword | Transcoding, Scalable Video Coding |
Abstract | Scalable Video Coding (SVC) enables low complexity adaptation of the compressed video, providing an efficient solution for video content delivery through heterogeneous networks and to different displays. However, legacy video and most commercially available content capturing devices use conventional non-scalable coding, e.g., H.264/AVC. This paper proposes an efficient transcoder from H.264/AVC to a wavelet-based SVC to exploit the advantages offered by the SVC technology. The proposed transcoder is able to cope with different coding configurations in H.264/AVC, such as IPP or IBBP with multiple reference frames. To reduce the transcoder's complexity, motion information and the presence of residual data extracted from the decoded H.264/AVC video are exploited. Experimental results show a good performance of the proposed transcoder in terms of decoded video quality and system complexity. |
Title | Edge-Adaptive Transforms for Efficient Depth Map Coding |
Author | Godwin Shen, Woo-shik Kim, Sunil Kumar Narang, Antonio Ortega (University of Southern California, U.S.A.), Jaejoon Lee, HoCheon Wey (Samsung Advanced Institute of Technology, Republic of Korea) |
Page | pp. 566 - 569 |
Keyword | Multiview plus depth (MVD), Depth coding, Rate-distortion optimization |
Abstract | In this work a new set of edge-adaptive transforms (EATs) is presented as an alternative to the standard DCTs used in image and video coding applications. These transforms avoid filtering across edges in each image block, and thus avoid creating large high-frequency coefficients. They are then combined with the DCT in H.264/AVC, and a transform mode selection algorithm is used to choose between DCT and EAT in an RD-optimized manner. Applied to coding the depth maps used for view synthesis in a multi-view video coding system, the transforms provide up to 29% bit rate reduction for a fixed quality in the synthesized views. |
Title | Direction-Adaptive Hierarchical Decomposition for Image Coding |
Author | Tomokazu Murakami, Keita Takahashi, Takeshi Naemura (The University of Tokyo, Japan) |
Page | pp. 570 - 573 |
Keyword | Image decomposition, directional prediction, L1 norm, directional transform, wavelet |
Abstract | A new model of decomposing an image hierarchically into direction-adaptive subbands using pixel-wise direction estimation is presented. For each decomposing operation, an input image is divided into two parts: a base image subsampled from the input image and subband components. The subband components consist of the residuals of estimating the pixels skipped by the subsampling, which ensures the invertibility of the decomposition. The estimation is performed in a direction-adaptive way, with the optimal direction determined by an L1-norm criterion for each pixel, aiming to achieve the good energy compaction that is suitable for image coding. Furthermore, since the L1 norms are obtained from the base image alone, we do not need to retain the directional information explicitly, which is another advantage of our model. Experimental results show that the proposed model can achieve lower entropy than the conventional Haar or D5/3 discrete wavelet transforms in the case of lossless coding. |
Title | A Robust Video Super-Resolution Algorithm |
Author | Xinfeng Zhang (Institute of Computing Technology, Chinese Academy of Sciences, China), Ruiqin Xiong, Siwei Ma, Wen Gao (School of Electronic Engineering and Computer Science, Peking University, China) |
Page | pp. 574 - 577 |
Keyword | super-resolution, kernel regression, irregular interpolation |
Abstract | In this paper, we propose a robust video super-resolution reconstruction method based on spatial-temporal orientation-adaptive kernel regression. First, we propose a robust registration efficiency model to reflect the reliability of temporal information. Second, we propose a spatial-temporal steering kernel that considers motions between frames and structures in each low-resolution frame. Simulation results demonstrate that our new super-resolution method substantially improves both subjective and objective quality compared with other resolution enhancement methods. |
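The regression core of such methods can be illustrated with plain (non-steering) Nadaraya-Watson kernel regression in 1-D: each estimate is a kernel-weighted average of nearby irregular samples. The samples and bandwidth are invented, and the spatial-temporal steering of the paper is omitted.

```python
import math

def kernel_regress(xs, ys, x0, h=1.0):
    """Nadaraya-Watson estimate at x0 with a Gaussian kernel: the
    weights decay with distance, so irregular samples near x0 dominate."""
    w = [math.exp(-((x - x0) / h) ** 2) for x in xs]
    return sum(wi * yi for wi, yi in zip(w, ys)) / sum(w)

# Irregularly spaced samples of y = 2x, as produced by registering
# several low-resolution frames onto a common high-resolution grid.
xs = [0.0, 0.9, 2.1, 3.0, 4.2]
ys = [2 * x for x in xs]
print(round(kernel_regress(xs, ys, 2.0, h=0.5), 2))
```

Steering versions replace the isotropic Gaussian with an anisotropic kernel elongated along local structure (and, here, along motion trajectories), which preserves edges better than this symmetric weighting.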
Title | An Efficient Method for the Detection of Ringing Artifacts and De-Ringing in JPEG Image |
Author | Shen-Chuan Tai, Bo-Jhih Chen, Mankit Choi (Institute of Computer and Communication Engineering, Department of Electrical Engineering, National Cheng Kung University, Taiwan) |
Page | pp. 578 - 581 |
Keyword | JPEG, ringing artifacts, image compression |
Abstract | The JPEG standard is commonly used for still image compression.
However, DCT-based JPEG coding is lossy and introduces artifacts such as blocking and ringing into the decompressed image.
In this paper, the proposed method focuses on efficiently detecting the blocks that cause ringing artifacts. These ringing blocks are then filtered, while texture regions are preserved as well as smooth regions. Simulation results show that the proposed method outperforms related algorithms both subjectively and objectively. Compared with the JPEG-decompressed image, the image processed by our algorithm achieves better PSNR as well as better visual quality, especially at lower-quality coding (higher compression rates). |
Title | Low Delay Distributed Video Coding Using Data Hiding |
Author | Krishna Rao Vijayanagar, Bowen Dan, Joohee Kim (Illinois Institute of Technology, U.S.A.) |
Page | pp. 582 - 585 |
Keyword | Distributed Video Coding, Data hiding, Low Delay DVC |
Abstract | Distributed Video Coding (DVC) is a popular research topic, and past years have seen several different implementations. DVC has been proposed as a solution for applications with limited battery resources and low hardware complexity, thus necessitating a low-complexity encoder; ideal applications include remote surveillance/monitoring and live video conferencing. However, current solutions use iteratively decodable channel codes such as LDPCA or Turbo codes, whose large latencies hinder real-time communication. The proposed architecture makes efficient use of Skip blocks to reduce the bitrate, eliminates the iterative decoding of the Wyner-Ziv (WZ) channel, and uses a simple data-hiding-based compression algorithm. This drastically cuts down the time complexity of the decoding procedure while maintaining a rate-distortion performance better than that of H.264/AVC Intra coding and other current DVC solutions. |
Title | FFT-Based Full-Search Block Matching Using Overlap-Add Method |
Author | Hidetake Sasaki, Zhen Li, Hitoshi Kiya (Tokyo Metropolitan University, Japan) |
Page | pp. 586 - 589 |
Keyword | block matching, FFT, overlap-add method, pattern recognition, motion estimation |
Abstract | One category of fast full-search block matching algorithms (BMAs) is based on the fast Fourier transform (FFT). In conventional methods of this category, the macroblock size must be adjusted to the search window size by zero-padding, so memory consumption and computational complexity depend heavily on the size difference between the macroblock and the search window. We propose a novel FFT-based BMA to solve this problem. The proposed method divides the search window into multiple sub search windows to flexibly control the difference between the macroblock and search window sizes. Simulation results show the effectiveness of the proposed method. |
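The sub-search-window idea is easiest to see in the spatial domain: splitting the window and merging the per-sub-window best candidates returns the same match as a full search. Plain SAD stands in for the FFT-based correlation here, and the signals are invented.

```python
def best_match(block, row, starts):
    """Return (position, SAD) of the best block match among starts."""
    def sad(s):
        return sum(abs(b - r) for b, r in zip(block, row[s:s + len(block)]))
    pos = min(starts, key=sad)
    return pos, sad(pos)

block = [7, 1, 7]
row = [0, 2, 7, 1, 7, 3, 0, 5, 9, 1]
full = range(len(row) - len(block) + 1)      # positions 0..7
# Split into two sub search windows and keep the overall best:
subs = [range(0, 4), range(4, 8)]
cands = [best_match(block, row, s) for s in subs]
merged = min(cands, key=lambda pc: pc[1])
assert merged == best_match(block, row, full)
print(merged)  # → (2, 0)
```

In the FFT version each sub window is only slightly larger than the macroblock, so the zero-padding overhead (and hence memory and transform size) stays small regardless of the total search range.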
Title | Temporal Inconsistency Measure for Video Quality Assessment |
Author | Songnan Li, Lin Ma, Fan Zhang, King Ngi Ngan (The Chinese University of Hong Kong, Hong Kong) |
Page | pp. 590 - 593 |
Keyword | video quality assessment, spatial visual quality measure, temporal inconsistency measure |
Abstract | Visual quality assessment plays a crucial role in many vision-related signal processing applications. In the literature, more effort has been spent on spatial visual quality measures: although a large number of video quality metrics have been proposed, the methods that use temporal information for quality assessment are less diversified. In this paper, we propose a novel method to measure temporal impairments. The proposed method can be incorporated into any image quality metric to extend it into a video quality metric. Moreover, it is easy to apply the proposed method in a video coding system, in combination with MSE, for rate-distortion optimization. |
Title | The Dependence of Visual Noise Perception on Background Color and Luminance |
Author | Makoto Shohara, Kazunori Kotani (Japan Advanced Institute of Science and Technology, Japan) |
Page | pp. 594 - 597 |
Keyword | Visual system, Noise measurement, Color measurement, Noise generators, Shot noise |
Abstract | This paper quantitatively describes the dependence of noise perception on background color and luminance. We use luminance and chromatic noise models derived from a shot-noise model, and conduct subjective and quantitative experiments using a modified gray-scale method. The subjective experiment results show that the perceived color noise depends on the background color, but the perceived luminance noise does not. In addition, the perceived chromatic noise level is about 8 times smaller than the calculated color noise. |
Title | An Adaptive Low-Complexity Global Motion Estimation Algorithm |
Author | Md Nazmul Haque, Moyuresh Biswas, Mark R. Pickering, Michael R. Frater (The University of New South Wales, Australia) |
Page | pp. 598 - 601 |
Keyword | global motion estimation, video coding, image registration, gradient-descent optimization |
Abstract | A limitation of current global motion estimation approaches is the additional complexity of the gradient-descent optimization that is typically required to calculate the optimal set of global motion parameters. In this paper we propose a new low-complexity algorithm for global motion estimation. The complexity of the proposed algorithm is reduced by performing the majority of the operations in the gradient-descent optimization using logic operations rather than full-precision arithmetic operations.
This use of logic operations means that the algorithm can be implemented much more easily in hardware platforms such as field programmable gate arrays (FPGAs). Experimental results show that the execution time for software implementations of the new algorithm is reduced by a factor of almost four when compared to existing fast implementations without any significant loss in registration accuracy. |
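One known way to run matching on logic operations is the one-bit transform: binarize both frames against a local mean, then score candidate displacements by XOR and bit counting instead of full-precision differences. The 1-D sketch below illustrates that general trick under invented signals; it is not the authors' algorithm.

```python
def binarize(sig, radius=2):
    """One-bit transform: 1 where a sample exceeds its local mean."""
    n = len(sig)
    bits = []
    for i in range(n):
        lo, hi = max(0, i - radius), min(n, i + radius + 1)
        bits.append(1 if sig[i] > sum(sig[lo:hi]) / (hi - lo) else 0)
    return bits

def xor_cost(a, b):
    """Matching cost using logic ops only: count of differing bits."""
    return sum(x ^ y for x, y in zip(a, b))

def est_shift(a, b, max_s=3):
    """Pick the (non-negative, in this toy) shift minimizing XOR cost."""
    return min(range(max_s + 1),
               key=lambda s: xor_cost(a[s:], b[:len(b) - s]))

# cur is ref shifted right by 1 sample (a global shift of 1).
cur = [10, 10, 80, 90, 80, 10, 10, 10, 10, 10]
ref = [10, 80, 90, 80, 10, 10, 10, 10, 10, 10]
print(est_shift(binarize(cur), binarize(ref)))  # → 1
```

On hardware, the XOR-and-popcount cost maps directly onto FPGA logic fabric, which is the kind of saving the abstract attributes to replacing full-precision arithmetic.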
Title | Bit-Plane Compressive Sensing with Bayesian Decoding for Lossy Compression |
Author | Sz-Hsien Wu (Electronic Engineering, National Chiao Tung University, Taiwan), Wen-Hsiao Peng (Computer Science, National Chiao Tung University, Taiwan), Tihao Chiang (Electronic Engineering, National Chiao Tung University, Taiwan) |
Page | pp. 606 - 609 |
Keyword | Compressive Sensing, Bayesian estimation, Bit-plane |
Abstract | This paper addresses the problem of reconstructing a compressively sampled sparse signal from its lossy and possibly insufficient measurements. The process involves estimation of the sparsity pattern and the sparse representation, for which we derive a vector estimator based on the Maximum a Posteriori Probability (MAP) rule. By making full use of signal prior knowledge, our scheme can use a number of measurements close to the sparsity to achieve perfect reconstruction. It also shows a much lower error probability for the sparsity pattern than prior work, given insufficient measurements. To better recover the most significant part of the sparse representation, we further introduce the notion of bit-plane separation. When applied to image compression, the technique in combination with our MAP estimator shows promising results compared to JPEG: the difference in compression ratio is within a factor of two at the same decoded quality. |
Title | A Reduced-Reference Metric Based on the Interest Points in Color Images |
Author | Michael Nauge, Mohamed-Chaker Larabi, Christine Fernandez (University of Poitiers, France) |
Page | pp. 610 - 613 |
Keyword | Metric, Interest point, Reduced reference, quality, saliency |
Abstract | In the last decade, an important research effort has been dedicated to quality assessment from both the subjective and objective points of view. The focus has mainly been on Full Reference (FR) metrics because of their ability to compare against an original. Only a few works have addressed Reduced Reference (RR) or No Reference (NR) metrics, which are very useful for applications where the original image is not available, such as transmission or monitoring. In this work, we propose an RR metric based on two concepts: the interest points of the image and object saliency in color images. This metric needs a very small amount of data (less than 8 bytes) to compute the quality scores. The results show a high correlation between the metric scores and human judgement, and a better quality range than well-known metrics like PSNR or SSIM. Finally, interest points are shown to be capable of predicting the quality of color images. |