Signal processing. Image communication / European Association for Signal Processing. Vol. 25 N° 4 — Issue date: April 2010. Published: 16/09/2012
Contents
Requantization transcoding for H.264/AVC video coding / Jan De Cock in Signal processing. Image communication, Vol. 25 N° 4 (April 2010)
[article]
in Signal processing. Image communication > Vol. 25 N° 4 (April 2010) . - pp. 235–254
Title: Requantization transcoding for H.264/AVC video coding
Document type: printed text
Authors: Jan De Cock; Stijn Notebaert; Peter Lambert
Publication year: 2012
Pages: pp. 235–254
General note: Electronics
Language: English (eng)
Keywords: Transcoding; Requantization; H.264/AVC; Drift propagation; Drift compensation
Abstract: In this paper, efficient solutions for requantization transcoding in H.264/AVC are presented. By requantizing residual coefficients in the bitstream, different error components can appear in the transcoded video stream. First, a requantization error arises from the successive quantization in the encoder and the transcoder. In addition, the loss of information caused by coarser quantization propagates due to dependencies in the bitstream. Because H.264/AVC uses both intra prediction and motion-compensated prediction, both spatial and temporal drift propagation arise in transcoded H.264/AVC video streams; the spatial drift in intra-predicted blocks results from mismatches in the surrounding prediction pixels caused by requantization. Both drift components are analyzed, and spatial drift is shown to have a determining impact on the visual quality of transcoded streams: it results in serious distortion and disturbing artifacts. To avoid this spatially propagating distortion, we introduce transcoding architectures based on spatial compensation techniques. By combining the individual temporal and spatial compensation approaches and applying different techniques based on the picture and/or macroblock type, overall architectures are obtained that trade off computational complexity against rate-distortion performance. The complexity of the presented architectures is significantly reduced compared with the cascaded decoder–encoder solutions typically used for H.264/AVC transcoding; the reduction is particularly large for the solution that uses spatial compensation only. Compared with traditional solutions without spatial compensation, both visual and objective quality are greatly improved.
ISSN: 0923-5965
Online: http://www.sciencedirect.com/science/article/pii/S092359651000007X
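The double-quantization effect the abstract describes can be illustrated with a toy uniform quantizer (a simplified sketch only; H.264/AVC actually uses scaled integer quantization with rounding offsets, and the step sizes below are arbitrary):

```python
import math

def quantize(coeff, qstep):
    # Uniform scalar quantizer: map the coefficient to the nearest
    # reconstruction level (level index times the step size).
    level = math.floor(coeff / qstep + 0.5)
    return level * qstep

coeff = 34.0
decoded = quantize(coeff, 4)        # encoder used a fine step -> 36
transcoded = quantize(decoded, 10)  # transcoder requantizes coarsely -> 40
direct = quantize(coeff, 10)        # one-step coarse quantization -> 30

# The cascaded error (|34 - 40| = 6) exceeds the single-step error
# (|34 - 30| = 4). It is this extra requantization error that, through
# intra and motion-compensated prediction, drifts across the picture.
```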
Variable block-based deblocking filter for H.264/AVC on low-end and low-bit rates terminals / Seung-Ho Shin in Signal processing. Image communication, Vol. 25 N° 4 (April 2010)
[article]
in Signal processing. Image communication > Vol. 25 N° 4 (April 2010) . - pp. 255–267
Title: Variable block-based deblocking filter for H.264/AVC on low-end and low-bit rates terminals
Document type: printed text
Authors: Seung-Ho Shin; Kyu-Ho Park; Young-Joon Chai
Publication year: 2012
Pages: pp. 255–267
General note: Electronics
Language: English (eng)
Keywords: H.264 AVC; Deblocking filter; Motion compensation; Variable blocks
Abstract: Compared with previous video coding standards, H.264/AVC supports variable-block-size motion compensation, multiple reference frames, 1/4-pixel motion vector accuracy, and an in-loop deblocking filter. While these coding techniques are central to its improvement in video compression, they also lead to high computational complexity. For H.264 video coding to be applied more widely on low-end, low-bit-rate terminals, it is essential to improve coding efficiency; at present the H.264 deblocking filter, which can improve the subjective quality of video, is hardly used on low-end terminals because of its computational complexity. In this paper, we propose an enhanced deblocking-filter method that efficiently reduces the blocking artifacts occurring in low-bit-rate video coding. In the proposed 'variable block-based deblocking filter' (VBDF), the temporal and spatial characteristics of moving pictures are extracted using the variable block-size information of motion compensation, the filter mode is classified into four modes according to these characteristics, and adaptive filtering is executed in each mode. The proposed VBDF reduces blocking artifacts, prevents excessive blurring, and achieves roughly 30–40% computational speed-up at about the same PSNR as existing methods.
ISSN: 0923-5965
Online: http://www.sciencedirect.com/science/article/pii/S0923596510000068
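Mode-dependent edge filtering can be caricatured as follows (this is not the authors' VBDF; the averaging filter and the strength values are invented for illustration):

```python
def deblock_edge(left, right, strength):
    # Smooth the two pixels adjacent to a block edge. In a VBDF-like
    # scheme, the strength (0.0 to 1.0) would be chosen per filter mode,
    # derived from the variable block sizes used for motion compensation.
    if strength == 0:
        return left, right
    avg = (left + right) / 2
    return (left + strength * (avg - left),
            right + strength * (avg - right))

# A strong mode flattens the edge; strength 0 skips filtering entirely,
# which is where the computational savings on low-end terminals come from.
strong = deblock_edge(100, 120, 1.0)
skipped = deblock_edge(100, 120, 0)
```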
Sub-pixel motion estimation using kernel methods / P.R. Hill in Signal processing. Image communication, Vol. 25 N° 4 (April 2010)
[article]
in Signal processing. Image communication > Vol. 25 N° 4 (April 2010) . - pp. 268–275
Title: Sub-pixel motion estimation using kernel methods
Document type: printed text
Authors: P.R. Hill; D. R. Bull
Publication year: 2012
Pages: pp. 268–275
General note: Electronics
Language: English (eng)
Keywords: Video compression; Kernel methods; Motion estimation
Abstract: Modern video codecs such as MPEG-2, MPEG-4 ASP, and H.264 depend on sub-pixel motion estimation to optimise rate-distortion efficiency. Sub-pixel motion estimation is implemented within these standards using interpolated values at 1/2- or 1/4-pixel accuracy; using these interpolated values reduces the residual energy of each predicted macroblock. However, this leads to a significant increase in complexity at the encoder, especially for H.264, where the cost of an exhaustive set of macroblock segmentations must be estimated for optimal mode selection. This paper presents a novel scheme for sub-pixel motion estimation based on the whole-pixel SAD distribution. Both half-pixel and quarter-pixel searches are guided by a model-free estimate of the SAD surface obtained with a two-dimensional kernel method. While giving equivalent rate-distortion performance, this approach approximately halves the number of quarter-pixel search positions, giving an overall speed-up of approximately 10% compared with the EPZS quarter-pixel method (the state-of-the-art H.264-optimised sub-pixel motion estimator).
ISSN: 0923-5965
Online: http://www.sciencedirect.com/science/article/pii/S0923596510000044
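The idea of fitting a model-free surface to whole-pixel SAD values and searching it at finer positions can be sketched in one dimension (illustrative only: the Gaussian kernel, bandwidth `h`, and quarter-pixel grid are assumptions, and the paper works on a two-dimensional surface):

```python
import math

def sad(block, ref, offset):
    # Sum of absolute differences between a block and a shifted reference row.
    return sum(abs(b - ref[i + offset]) for i, b in enumerate(block))

def kernel_subpel_min(offsets, sads, h=0.75, step=0.25):
    # Nadaraya-Watson (Gaussian-kernel) estimate of the SAD surface from
    # whole-pixel samples, evaluated on a quarter-pixel grid; return the
    # grid position with the smallest estimated SAD.
    def estimate(x):
        w = [math.exp(-((x - o) ** 2) / (2 * h * h)) for o in offsets]
        return sum(wi * s for wi, s in zip(w, sads)) / sum(w)
    n = int((offsets[-1] - offsets[0]) / step)
    grid = [offsets[0] + k * step for k in range(n + 1)]
    return min(grid, key=estimate)

ref = [10, 12, 20, 35, 50, 42, 30, 18, 12, 10]
block = ref[2:7]                       # the true match sits at offset 2
offsets = list(range(5))
sads = [sad(block, ref, o) for o in offsets]
best = kernel_subpel_min(offsets, sads)
```

No sub-pixel interpolation of the reference is needed to pick the candidate position; only the whole-pixel SAD samples feed the kernel estimate, which is where the complexity saving comes from.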
Rate-distortion optimization of scalable video codecs / Hoda Roodaki in Signal processing. Image communication, Vol. 25 N° 4 (April 2010)
[article]
in Signal processing. Image communication > Vol. 25 N° 4 (April 2010) . - pp. 276–286
Title: Rate-distortion optimization of scalable video codecs
Document type: printed text
Authors: Hoda Roodaki; Tonko Ćurko; Mohammad Ghanbari
Publication year: 2012
Pages: pp. 276–286
General note: Electronics
Language: English (eng)
Keywords: Rate-distortion optimization; Scalable video coding
Abstract: In this paper, joint optimization of the layers in layered video coding is investigated. Through theoretical analysis and simulations, it is shown that, owing to the stronger interaction between the layers in an SNR-scalable codec, this type of layering benefits most from joint optimization of the layers. A method for joint optimization is proposed, and its compression efficiency is contrasted with separate optimization and with an optimized single-layer coder. It is shown that in joint optimization of SNR-scalable coders, when the quantization step size of the enhancement layer is larger than half that of the base layer, an additional improvement is gained by not sending the enhancement-layer coefficients quantized to zero, provided they are quantized at the base layer. This results in a non-standard bitstream syntax; as a standard-syntax alternative, one may skip the inter-coded enhancement macroblocks. Extensive tests show that while separate optimization of SNR coders is inferior to a single-layer coder by more than 2 dB, joint optimization reduces this gap to 0.3–0.5 dB. Joint optimization also improves the quality of the base-layer video over separate optimization. Spatial scalability, like SNR scalability, benefits from joint optimization, though it cannot exploit the relation between the quantizer step sizes; the amount of improvement depends on the interpolation artifacts of the upsampled base layer and on that layer's residual quantization distortion. Hence, the degree of improvement depends on image content as well as the bit-rate budget. Simulation results show that jointly optimized spatially scalable coders are about 0.5–1 dB inferior to the optimized single-layer coder, whereas their separately optimized counterparts, like SNR scalability, are more than 2 dB worse.
ISSN: 0923-5965
Online: http://www.sciencedirect.com/science/article/pii/S0923596510000056
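The gap between separate and joint layer optimization can be seen with a toy Lagrangian search (J = D + λR) over invented operating points; every number below is made up purely to show that a base-layer choice that looks best in isolation need not be best for the layer pair:

```python
# Toy (rate, distortion) points. The enhancement layer's achievable
# points depend on which base-layer point was chosen (values invented).
base = [(200, 40.0), (60, 70.0)]
enh = {0: [(80, 10.0), (40, 25.0)],   # enhancement points given base[0]
       1: [(80, 6.0), (40, 30.0)]}    # enhancement points given base[1]
lam = 0.2

def cost(rate, dist):
    # Lagrangian rate-distortion cost J = D + lambda * R.
    return dist + lam * rate

# Separate optimization: commit to the base layer on its own merits,
# then optimize the enhancement layer given that choice.
b_sep = min(range(len(base)), key=lambda i: cost(*base[i]))
e_sep = min(enh[b_sep], key=lambda p: cost(*p))
j_sep = cost(base[b_sep][0] + e_sep[0], base[b_sep][1] + e_sep[1])

# Joint optimization: search both layers' choices together.
b_joint, e_joint = min(
    ((i, p) for i in range(len(base)) for p in enh[i]),
    key=lambda ip: cost(base[ip[0]][0] + ip[1][0],
                        base[ip[0]][1] + ip[1][1]))
j_joint = cost(base[b_joint][0] + e_joint[0], base[b_joint][1] + e_joint[1])
```

Here the separately optimized coder locks in the "better" base point and ends up with a higher total cost than the joint search, which is willing to accept a worse base layer for a better overall trade-off.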
A semantic framework for video genre classification and event analysis / Junyong You in Signal processing. Image communication, Vol. 25 N° 4 (April 2010)
[article]
in Signal processing. Image communication > Vol. 25 N° 4 (April 2010) . - pp. 287–302
Title: A semantic framework for video genre classification and event analysis
Document type: printed text
Authors: Junyong You; Guizhong Liu; Andrew Perkis
Publication year: 2012
Pages: pp. 287–302
General note: Electronics
Language: English (eng)
Keywords: Semantic video analysis; Probabilistic model; Video genre classification; Event analysis
Abstract: Semantic video analysis is a key issue in digital video applications, including video retrieval, annotation, and management. Most existing work on semantic video analysis focuses on event detection for specific video genres, while genre classification is treated as a separate, independent problem. In this paper, we present a semantic framework that performs weakly supervised video genre classification and event analysis jointly, using probabilistic models on MPEG video streams. Several computable semantic features that accurately reflect event attributes are derived. Based on an intensive analysis of the connection between video genres and the contextual relationships among events, as well as the statistical characteristics of the dominant event, an analysis algorithm based on a hidden Markov model (HMM) and a naive Bayesian classifier (NBC) is proposed for video genre classification. A Gaussian mixture model (GMM) is built to detect the contained events using the same semantic features, and an event-adjustment strategy is proposed based on an analysis of the GMM structure and the predefinition of video events. Subsequently, a special event is recognized from the detected events by another HMM. Simulation experiments on video genre classification and event analysis using a large number of video data sets demonstrate the promising performance of the proposed framework.
ISSN: 0923-5965
Online: http://www.sciencedirect.com/science/article/pii/S0923596510000172
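The naive-Bayesian side of such a framework can be sketched with two invented "semantic features" (the feature meanings, Gaussian parameters, and genres below are all made up; the actual framework also combines HMMs and GMMs over features derived from MPEG streams):

```python
import math

# Toy naive-Bayes genre classifier over two hypothetical semantic
# features, e.g. average shot length and motion intensity.
priors = {"sports": 0.5, "news": 0.5}
# Per-genre Gaussian parameters (mean, std) for each feature.
params = {"sports": [(3.0, 1.0), (8.0, 2.0)],
          "news":   [(7.0, 2.0), (2.0, 1.0)]}

def log_gauss(x, mu, sigma):
    # Log of the Gaussian density at x.
    return (-math.log(sigma * math.sqrt(2 * math.pi))
            - (x - mu) ** 2 / (2 * sigma ** 2))

def classify(features):
    # Pick the genre maximizing log prior + sum of per-feature
    # log-likelihoods (the naive conditional-independence assumption).
    def score(genre):
        return math.log(priors[genre]) + sum(
            log_gauss(x, mu, s)
            for x, (mu, s) in zip(features, params[genre]))
    return max(priors, key=score)

genre = classify([2.5, 7.0])   # short shots, high motion
```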
Dynamic replacement of video coding elements / M. Bystrom in Signal processing. Image communication, Vol. 25 N° 4 (April 2010)
[article]
in Signal processing. Image communication > Vol. 25 N° 4 (April 2010) . - pp. 303–313
Title: Dynamic replacement of video coding elements
Document type: printed text
Authors: M. Bystrom; I. Richardson; S. Kannangara
Publication year: 2012
Pages: pp. 303–313
General note: Electronics
Language: English (eng)
Keywords: Video; Video coding; Decoding; Reconfigurable coding
Abstract: The long timescale between the development of new video coding technologies and their adoption into standards results in slow improvement in compression efficiency, despite the scale of ongoing research into new compression techniques. Standards-based codecs have limited ability to adapt to changes in video content, delivery environments, or platforms. There is growing recognition, for example with the MPEG Reconfigurable Video Coding initiative, that increased codec flexibility is needed; we anticipate that even further developments are required to address these obstacles to video coder advancement. To this end, we present a new approach to video coding that enables flexible, dynamic re-configuration of video coding functions. This adaptability is achieved by sending configuration information to the decoder during a communications session as part of the compressed video signal; the decoder responds by reconfiguring itself, adapting the decoding process as prompted by the encoder. We describe a particular example of dynamic re-configuration in a simple video coding scenario: a video coder is reconfigured dynamically by sending descriptions of new transforms during coding. We evaluate five approaches to re-configuration and show that all demonstrate rate-distortion gains over baseline coders, despite the rate increase incurred by sending configuration information.
ISSN: 0923-5965
Online: http://www.sciencedirect.com/science/article/pii/S0923596510000329
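The in-band re-configuration idea — the encoder sends a description of a new coding tool and the decoder swaps it in mid-stream — can be caricatured as follows (the class and message format are hypothetical; the paper transmits actual transform descriptions in the bitstream, not callables):

```python
class ReconfigurableDecoder:
    # Hypothetical decoder whose inverse transform can be replaced by a
    # configuration message arriving inside the coded stream.
    def __init__(self, inverse_transform):
        self.inverse_transform = inverse_transform

    def handle(self, message):
        if message["type"] == "config":
            # The encoder sent a new transform; reconfigure and continue.
            self.inverse_transform = message["transform"]
            return None
        return self.inverse_transform(message["coeffs"])

identity = lambda c: list(c)
scale2 = lambda c: [2 * x for x in c]

dec = ReconfigurableDecoder(identity)
out1 = dec.handle({"type": "data", "coeffs": [1, 2, 3]})
dec.handle({"type": "config", "transform": scale2})     # in-band update
out2 = dec.handle({"type": "data", "coeffs": [1, 2, 3]})
```

The rate cost of the "config" message is the overhead the abstract mentions; the claim is that the rate-distortion gain from the better-suited transform outweighs it.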
Copies
Barcode | Call number | Medium | Location | Section | Availability |
---|---|---|---|---|---|
no copies |