<h1>VQEG Tools and Subjective Labs Setup</h1>
<p>Software tools and guidance for the research community. (Feed generated by Jekyll, 2023-07-05.)</p>

<h2>Datasheet for Subjective and Objective Quality Assessment Datasets (2023-07-05)</h2>
<p>Over the years, many subjective and objective quality assessment datasets have been created and made available to the research community. However, there is no standard process for documenting the various aspects of a dataset, such as details about the source sequences, the number of test subjects, the test methodology, or the encoding settings. Such information is often of great importance to the end-user of the dataset, as it can help them quickly understand its motivation and scope. Without such a template, each reader is left to collate the information from the relevant publication or website, which is a tedious and time-consuming process. In some cases, the absence of a template to guide the documentation process can result in the unintentional omission of important information.</p>
<p>We address this simple but significant gap by proposing a datasheet template for documenting the various aspects of subjective and objective quality assessment datasets for multimedia data. The template presented in this work aims to simplify the documentation process for existing and new datasets and to improve their reproducibility.</p>
<p>More details can be found in the paper <a href="https://drive.google.com/file/d/1E1C8sWk-IYGCmqRRv6tclmtNgtWrhM-8/view?usp=sharing">here</a>.
The poster presented at QoMEX’23 is available <a href="https://drive.google.com/file/d/1Z0TYObfiS8Jy3a_UExPRax1ERtZJG6eB/view?usp=sharing">here</a>.</p>
<p>Authors: Dr Nabajeet Barman, Kingston University, London, United Kingdom (n.barman@ieee.org, nabajeetbarman4@gmail.com); Dr Yuriy Reznik, Brightcove Inc, Seattle, USA (yreznik@brightcove.com); Prof Maria Martini, Kingston University, London, United Kingdom (m.martini@kingston.ac.uk).</p>

<h2 id="video-quality-metrics-toolkit-an-open-source-software-to-assess-video-quality">Video quality metrics toolkit (VQMTK): An open source software to assess video quality (2023-06-27)</h2>
<p>Video content on the Internet continues to grow. As a result, streaming platforms must ensure a certain level of quality when preparing their content. To this end, several metrics have been developed by the research community to evaluate video quality. This work integrates 14 video metrics and the SI-TI indicators into a container image to create a cross-platform tool, VQMTK. The tool offers a web interface and a Bash script that combines all metrics into a single tool. Performance tests have demonstrated that the tool is capable of handling all the integrated metrics using 4K video samples. The tool can be used in scientific and educational environments.</p>
<p>The repository describes a container that includes the artifacts needed to compute 14 video quality metrics and the SI/TI indicators. The container includes Jupyter notebooks showing how to compute each metric. In addition, a command-line interface script is included; it allows computing any combination of metrics and can be integrated into any processing pipeline.</p>
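<p>As background on the SI/TI indicators mentioned above, the TI (temporal information) indicator of ITU-T Rec. P.910 can be sketched in a few lines. This is an illustrative sketch operating on flat lists of luma values, not VQMTK's actual implementation:</p>

```python
import statistics

def temporal_information(frames):
    """Sketch of the TI indicator from ITU-T Rec. P.910: the maximum over
    time of the standard deviation of the pixel-wise difference between
    successive frames. Each frame is a flat list of luma values."""
    ti_per_transition = []
    for prev, cur in zip(frames, frames[1:]):
        diff = [c - p for c, p in zip(cur, prev)]
        ti_per_transition.append(statistics.pstdev(diff))
    return max(ti_per_transition)
```

<p>The SI (spatial information) indicator is analogous but uses the standard deviation of a Sobel-filtered frame instead of a frame difference.</p>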
<ul>
<li>VQMTK is an open source project that is available on <a href="https://github.com/cloudmedialab-uv/vqmtk">GitHub</a>.</li>
<li>The paper can be found at: <a href="https://doi.org/10.1016/j.softx.2023.101427">https://doi.org/10.1016/j.softx.2023.101427</a>.</li>
</ul>
<p>Authors: Wilmer Moina-Rivera, Juan Gutiérrez-Aguado and Miguel Garcia-Pineda.</p>

<h2>VQEG Image Quality Evaluation Tool (VIQET) (2022-01-10)</h2>
<p>The VQEG Image Quality Evaluation Tool (VIQET) is an objective, no-reference photo quality evaluation tool, released as open source and designed to evaluate the quality of consumer photos. To perform an evaluation, VIQET requires a set of photos from the test device; it estimates an overall Mean Opinion Score (MOS) for the device based on the individual image MOS scores in the set.</p>
<ul>
<li>VIQET is an open source project that is available on <a href="https://www.GitHub.com/VIQET">GitHub</a>.</li>
<li>The desktop tool installer can be downloaded at: <a href="https://github.com/VIQET/VIQET-Desktop/releases">https://github.com/VIQET/VIQET-Desktop/releases</a></li>
<li>The source code can be found at: <a href="https://github.com/VIQET/VIQET-Desktop">https://github.com/VIQET/VIQET-Desktop</a></li>
</ul>

<h2>MATLAB Code for VQEG Multimedia (2021-02-19)</h2>
<p>The appendices of this publication contain MATLAB code that the VQEG ILG later used for the official Multimedia analyses:</p>
<ul>
<li>Appendix B.4 maps metric data to MOSs</li>
<li>Appendix B.9 computes whether two RMSEs are significantly different</li>
<li>Appendix B.3 maps individual subjective tests onto a single scale using a common set</li>
</ul>
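<p>The significance test in Appendix B.9 can be sketched as an F-ratio comparison. This is an assumed form following ITU-T P.1401-style model evaluation (assuming SciPy); the actual MATLAB code may differ:</p>

```python
from scipy.stats import f

def rmse_significantly_different(rmse_1, n_1, rmse_2, n_2, alpha=0.05):
    """Sketch (assumed F-ratio form, per ITU-T P.1401-style comparison):
    two RMSEs differ significantly if the ratio of their squared values
    exceeds the critical F value at the given significance level."""
    hi, n_hi = (rmse_1, n_1) if rmse_1 >= rmse_2 else (rmse_2, n_2)
    lo, n_lo = (rmse_2, n_2) if rmse_1 >= rmse_2 else (rmse_1, n_1)
    if lo == 0:
        return hi > 0
    ratio = (hi / lo) ** 2
    return ratio > f.ppf(1 - alpha, n_hi - 1, n_lo - 1)
```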
<h2 id="paper-abstract">Paper Abstract</h2>
<p>This report presents techniques for evaluating objective video quality models using overlapping subjective data sets. The techniques are demonstrated using data from the Video Quality Experts Group (VQEG) Multi-Media (MM) Phase I experiments. These results also provide a supplemental analysis of the performance achieved by the objective models that were submitted to the MM Phase I experiments. The analysis presented herein uses the subjective scores from the common set of video clips to map all the subjective scores from the 13 or 14 experiments (at a given image resolution) onto a single subjective scale. This mapping greatly increases the available data and thus allows for more powerful analysis techniques to be performed. Resolving power values are presented for each model and image resolution. On a per-clip level, models’ responses to stimuli are analyzed with respect to all stimuli, each coding algorithm, coding-only impairments, and transmission error impairments. The models’ responses to stimuli are also analyzed on per-system and per-scene levels. Results indicate the amount of improvement possible when averaging over multiple scenes or systems.</p>
<p>Author: Margaret Pinson, NTIA.</p>

<h2>NRMetricFramework (2020-03-06)</h2>
<p>NRMetricFramework is an open software framework for the collaborative development of No Reference (NR) metrics for Image Quality Analysis (IQA) and Video Quality Analysis (VQA). The framework includes the support tools necessary to begin research and avoid common mistakes. The vision is a series of NR-VQA metrics that build upon each other to meet industry requirements for scope, accuracy, and capability. Documentation for this repository is provided in the <a href="https://github.com/NTIA/NRMetricFramework/wiki">Wiki</a>.</p>
<p>This software was developed by employees of the National Telecommunications and Information Administration (NTIA), an agency of the Federal Government and is provided to you as a public service.</p>
<p>Please review the License terms.</p>
<h2 id="acknowledgements">Acknowledgements</h2>
<p>If you use this repository in your research or product development, please reference this GitHub repository and the paper listed below:</p>
<p>Margaret H. Pinson, Philip J. Corriveau, Mikołaj Leszczuk, and Michael Colligan, “Open Software Framework for Collaborative Development of No Reference Image and Video Quality Metrics,” Human Vision and Electronic Imaging (HVEI), Jan. 2020.</p>
<p>This software development effort was supported by the Public Safety Communications Research (PSCR) Division of the National Institute of Standards and Technology (NIST), an agency of the U.S. Department of Commerce (DOC).</p>
<p>This repository was inspired by discussions and work conducted in the Video Quality Experts Group (VQEG), especially the efforts of the No Reference Metrics (NORM) project and the Video and Image Models for consumer content Evaluation (VIME) project.</p>

<h2>VQEGNumSubjTool (2019-11-21)</h2>
<p>Scripts and data for estimating the required number of test subjects for typical Quality of Experience experiments.</p>
<p>The app can be used interactively at <a href="https://slhck.shinyapps.io/number-of-subjects/">https://slhck.shinyapps.io/number-of-subjects/</a>.</p>
<p>The calculations are based on knowing:</p>
<ul>
<li>the number of statistical t-test comparisons to be performed</li>
<li>the statistical significance level (alpha), typically 0.05</li>
<li>the desired power of the test (1 - Type II error probability), typically 0.8</li>
<li>the test conducted (paired or independent/two-sample), typically paired</li>
<li>the expected effect size (expected MOS difference divided by standard deviation), which is automatically calculated</li>
</ul>
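<p>The calculation these inputs feed can be sketched with a normal approximation and a Bonferroni correction. This is a hedged sketch (assuming SciPy); the app itself may use exact noncentral-t power computations:</p>

```python
from math import ceil
from scipy.stats import norm

def min_subjects(n_comparisons, effect_size, alpha=0.05, power=0.8):
    """Normal-approximation sketch of the sample size for a paired t-test,
    with Bonferroni correction for multiple comparisons."""
    alpha_corrected = alpha / n_comparisons       # Bonferroni correction
    z_alpha = norm.ppf(1 - alpha_corrected / 2)   # two-sided critical value
    z_beta = norm.ppf(power)                      # quantile for desired power
    return ceil(((z_alpha + z_beta) / effect_size) ** 2)
```

<p>For a single comparison at a medium effect size of 0.5, this yields roughly 32 subjects; the exact noncentral-t calculation gives a slightly larger number.</p>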
<p>The result is the minimum number of subjects needed to obtain enough data to perform statistical comparisons corrected for Type I errors.</p>
<p>Authors: Werner Robitza, Kjell Brunnström.</p>

<h2>tpkloss Packet Loss Tool (2019-11-04)</h2>
<p>This software introduces losses into a PCAP capture file using a 2-state or 4-state Markov model. The Markov models can be parameterized in detail or through default values. For use in subjective tests, the tool has been extended so that it can protect the start of the capture file, the end, or both from impairment for a given number of milliseconds.</p>
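<p>The simpler of the two models, the 2-state (Gilbert) Markov chain, can be sketched as follows. Parameter names here are hypothetical and illustrate the model only, not tpkloss's actual interface:</p>

```python
import random

def gilbert_loss_pattern(n_packets, p_good_to_bad=0.05, p_bad_to_good=0.5, seed=42):
    """Sketch of a 2-state (Gilbert) Markov loss model: the chain moves
    GOOD -> BAD with probability p_good_to_bad and BAD -> GOOD with
    probability p_bad_to_good; packets are dropped while in BAD.
    (Hypothetical parameter names, not the tool's interface.)"""
    rng = random.Random(seed)
    state_bad = False
    lost = []
    for i in range(n_packets):
        if state_bad:
            if rng.random() < p_bad_to_good:
                state_bad = False
        else:
            if rng.random() < p_good_to_bad:
                state_bad = True
        if state_bad:
            lost.append(i)
    return lost
```

<p>Because the BAD state persists for several packets on average, losses arrive in bursts rather than independently, which is what makes Markov models a better fit for network impairments than uniform random drops.</p>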
<p>This software is provided at no cost for experimental use in lab environments and Telchemy makes no warranty with regard to its operation or to any issues that may arise from its use. Telchemy is not aware of any intellectual property issues that may result from the use of this software however makes no warranty with regard to patent infringement. Telchemy has made no IPR claims with regard to this software with the exception of the requirements contained in this header. The software may be modified, copied and made available to other parties however this header must be retained intact. The software may not be sold or incorporated into commercial applications.</p>
<p>Author: Telchemy.</p>

<h2>ffmpeg-bitrate-stats (2019-06-14)</h2>
<p>Simple script for calculating bitrate statistics using FFmpeg.</p>
<p>Requirements:</p>
<ul>
<li>Python 3.6</li>
<li>FFmpeg:
<ul>
<li>download a static build from <a href="http://ffmpeg.org/download.html">their website</a></li>
<li>put the <code class="language-plaintext highlighter-rouge">ffprobe</code> executable in your <code class="language-plaintext highlighter-rouge">$PATH</code></li>
</ul>
</li>
<li><code class="language-plaintext highlighter-rouge">pip3 install -r requirements.txt</code></li>
</ul>
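<p>The underlying idea is to aggregate ffprobe's per-packet data (e.g. from <code class="language-plaintext highlighter-rouge">ffprobe -show_entries packet=pts_time,size -of json</code>) into per-second bitrates. The sketch below illustrates that aggregation only; the actual script's behavior and output may differ:</p>

```python
import statistics

def bitrate_per_second(packets):
    """Sketch: sum packet sizes into bits per second, keyed by the integer
    part of each packet's pts_time (illustrative, not the tool's code)."""
    bits = {}
    for p in packets:
        second = int(float(p["pts_time"]))
        bits[second] = bits.get(second, 0) + int(p["size"]) * 8
    return bits

def bitrate_summary(packets):
    """Min/max/mean over the per-second bitrates."""
    rates = list(bitrate_per_second(packets).values())
    return {"min": min(rates), "max": max(rates), "mean": statistics.mean(rates)}
```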
<p>Installation</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>pip install ffmpeg_bitrate_stats
</code></pre></div></div>
<p>Author: Werner Robitza.</p>

<h2>ffmpeg-debug-qp (2019-06-14)</h2>
<p>Prints QP values of the input sequence on a per-frame basis.</p>
<p>Supported input:</p>
<ul>
<li>MPEG-2</li>
<li>MPEG-4 Part 2</li>
<li>H.264 / MPEG-4 Part 10 (AVC)</li>
</ul>
<p>Supported formats:</p>
<ul>
<li>MPEG-4 Part 14</li>
<li>H.264 Annex B bytestreams</li>
</ul>
<p>To run the tool:</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>./ffmpeg_debug_qp test.mp4
</code></pre></div></div>
<p>The output will be as follows:</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>...
[h264 @ 0x7fcf61813e00] nal_unit_type: X, nal_ref_idc: X
[h264 @ 0x7fcf61813e00] New frame, type: X
[h264 @ 0x7fcf61813e00] AABBCCDD...
</code></pre></div></div>
<p>In the above, AA is the QP value of the first macroblock, BB of the second, and so on.
One such line is printed for every macroblock row of each frame.</p>
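<p>A minimal parser for output in this format might look as follows. This is an illustrative sketch based only on the format shown above, not the repository's own parser:</p>

```python
import re
import statistics

def per_frame_average_qp(log_lines):
    """Sketch parser for the format above: QP rows are lines whose payload
    is a run of concatenated two-digit values; returns one average QP per
    frame (illustrative only)."""
    averages, current = [], []
    for line in log_lines:
        payload = re.sub(r"^\[h264 @ 0x[0-9a-f]+\]\s*", "", line.strip())
        if payload.startswith("New frame"):
            if current:
                averages.append(statistics.mean(current))
            current = []
        elif re.fullmatch(r"(\d\d)+", payload):
            # split the row into two-digit QP values
            current.extend(int(payload[i:i + 2]) for i in range(0, len(payload), 2))
    if current:
        averages.append(statistics.mean(current))
    return averages
```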
<p>You can parse the values with the <code class="language-plaintext highlighter-rouge">parse-qp-output.py</code> script, e.g.</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ ./ffmpeg-debug-qp test.mp4 2> qp-values.txt
$ ./parse-qp-output.py qp-values.txt qp-values.ldjson
</code></pre></div></div>
<p>This produces a newline-delimited JSON file that is easier to parse. Each line contains one frame.</p>
<p>Authors: Werner Robitza, Steve Göring, Pierre Lebreton.</p>

<h2>ffmpeg-quality-metrics (2019-06-14)</h2>
<p>Simple script for calculating quality metrics with FFmpeg.</p>
<p>Currently supports PSNR, SSIM and VMAF.</p>
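<p>Of the three, PSNR has the simplest definition and can be sketched directly. This is an illustrative per-channel computation on flat pixel lists, matching the standard definition that FFmpeg's psnr filter implements:</p>

```python
import math

def psnr(frame_a, frame_b, max_val=255):
    """Standard PSNR between two equal-length lists of 8-bit pixel values:
    10 * log10(max_val^2 / MSE)."""
    mse = sum((a - b) ** 2 for a, b in zip(frame_a, frame_b)) / len(frame_a)
    if mse == 0:
        return math.inf  # identical frames
    return 10 * math.log10(max_val ** 2 / mse)
```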
<p>Requirements:</p>
<ul>
<li>Python 3.6</li>
<li>FFmpeg:
<ul>
<li>download a static build from <a href="http://ffmpeg.org/download.html">their website</a></li>
<li>put the <code class="language-plaintext highlighter-rouge">ffmpeg</code> executable in your <code class="language-plaintext highlighter-rouge">$PATH</code></li>
</ul>
</li>
</ul>
<p>Optionally, you may install FFmpeg with <code class="language-plaintext highlighter-rouge">libvmaf</code> support to run VMAF score calculation:</p>
<ul>
<li>Install <a href="https://brew.sh/">Homebrew</a></li>
<li>Install <a href="https://github.com/varenc/homebrew-ffmpeg/">this tap</a></li>
<li>Run <code class="language-plaintext highlighter-rouge">brew install ffmpeg --with-libvmaf</code>.</li>
</ul>
<p>Installation</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>pip install ffmpeg_quality_metrics
</code></pre></div></div>
<p>Author: Werner Robitza.</p>