
Content for  TR 26.996  Word version:  18.1.0


A  ISAR Pdoc on Testing Aspects for Phase/Track 2/a

A.1  Overview

This Annex contains the core part of the permanent document on Testing Aspects for the ISAR Phase/Track 2/a solution selection. The complete document is found for reference in the electronic attachment of this TR.

A.2  Organization of tests

A.2.1  Overview of the Selection Test Process

The selection tests of the IVAS specific ISAR solution will be organized as in-house tests by Solution Proponents Laboratories. The execution of these subjective tests is under the responsibility of the solution proponents participating in the selection and other volunteering organizations.
The selection test experiments will be duplicated and additionally run by suitable cross-checkers, Cross-check Listening Laboratories, with no stake in the candidate solutions under test.
The selection test results will be reported by the testing organizations to SA4 with suggested statistical result analysis. SA4 will review these analyses and, if found valid, confirm them and base its selection decision on them.
The processing of the selection test material is under the responsibility of the solution proponents participating in the selection. It is based on commonly available processing scripts, solution candidate executables, original sound material and head-tracker trajectories available to the solution proponents and other organizations who volunteer to carry out cross-checks.

A.2.2  Responsibilities

A.2.2.1  Solution Proponent (SP)

The specific responsibilities of the SP are:
  • Make executables of solution candidate publicly available.
  • Develop common processing scripts using the condition lists defined in this document and the processing steps defined in the processing plan.
  • Process the test material using commonly available processing scripts, the shared solution candidate executables, original sound material and head-tracker trajectories.
  • Communicate with Volunteering Processing Cross-check Organizations to verify correct processing.

A.2.2.2  Solution Proponents Laboratory (SPL)

  • Carry out selection tests according to the requirements of this test plan.
  • Carry out statistical result analysis according to the requirements of this test plan and provide test report including analysis to SA4.
  • Obligations as SPL:
    • The testing shall be carried out in a blinded fashion, not revealing the conditions to the subjects.
    • Test subjects who were actively involved in developing the split rendering features in the systems under test that are exposed by the experiments shall not be used.
    • The test report shall describe how the listening lab ensured unbiased testing.

A.2.2.3  Volunteering Cross-check Listening Laboratories (CLL)

  • Carry out selection tests according to the requirements of this test plan.
  • Carry out statistical result analysis according to the requirements of this test plan and provide test report including analysis to SA4.
  • Obligations as CLL:
    • The testing shall be carried out in a blinded fashion, not revealing the conditions to the subjects.
    • The CLL shall not be a contributor to the split rendering features in the systems under test that are exposed by the experiments.
    • The test report shall contain a statement confirming that the listening lab has met the obligations.

A.2.2.4  Listening Laboratories (LL) (both SPL and CLL)

  • Provide a listening environment meeting the listening conditions for BS.1534 testing [12].

A.2.2.5  Volunteering Processing Cross-check Organizations (PCO)

  • Process the test material using commonly available processing scripts, the shared solution candidate executables, original sound material and head-tracker trajectories.
  • Communicate with Solution Proponent to verify correct processing.

A.2.2.6  SA4

  • Review selection test analyses received from LLs and determine their validity.
  • Select the Candidate Solution according to the selection rules for IVAS specific ISAR solutions targeted in Phase/Track 2/a of the ISAR Work Plan.

A.2.3  Statistical analysis of results

The statistical result analysis reports shall present the results of the Terms of Reference (ToR) tests using Student's Dependent Groups t-test (single-sided at 95% confidence level). Results of the Requirement ToR tests for each experiment shall be presented containing all relevant data allowing verification of proper execution of the Student's Dependent Groups t-test.
For the Requirement ToR tests, this should lead to the following indications:
  • Requirement ToR tests that are passed (i.e., CuT "not worse than" Requirement) are indicated by CuT NWT Ref.
  • Requirement ToR tests that are exceeded (i.e., CuT "better than" Requirement) are indicated by CuT BT Ref.
  • Requirement ToR tests that are failed (i.e., CuT "worse than" Requirement) are indicated by CuT WT Ref.
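The verdict logic above can be sketched as follows. This is an illustrative outline, not the normative analysis procedure; the stdlib-only implementation takes the one-sided critical value as a parameter (e.g. 1.833 for df = 9 with 10 listeners).

```python
from math import sqrt
from statistics import mean, stdev

def requirement_verdict(cut_scores, ref_scores, t_crit):
    """Classify a Requirement ToR test as NWT / BT / WT using a
    single-sided dependent-groups (paired) t-test. t_crit is the
    one-sided 95% critical value for df = n - 1."""
    diffs = [c - r for c, r in zip(cut_scores, ref_scores)]
    n = len(diffs)
    t = mean(diffs) / (stdev(diffs) / sqrt(n))  # paired t statistic
    if t <= -t_crit:
        return "CuT WT Ref"   # significantly worse: requirement failed
    if t >= t_crit:
        return "CuT BT Ref"   # significantly better: requirement exceeded
    return "CuT NWT Ref"      # not worse than requirement: passed
```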

A.3  Identities

A.3.1  Involved organizations

In the following, the identities of the involved organizations are listed:
  • SP (proponent of CuT):
    • Proponent companies:
      Dolby Sweden AB, Ericsson LM, Fraunhofer IIS, Nokia Corporation, NTT, Orange, Panasonic Holdings Corporation, Philips International B.V., Qualcomm Incorporated, VoiceAge Corporation
    • Main contributor(s) to CuT whose split rendering features are exposed in the experiments:
      Dolby Sweden AB, Fraunhofer IIS
  • SPLs:
    • Dolby Sweden AB, Fraunhofer IIS
  • CLLs:
    • Qualcomm, Nokia, Bytedance, Ittiam
  • Processing Cross-check Organizations (PCO)
    • Fraunhofer IIS, Dolby

A.3.2  LL assignment

Experiment                       SPL             CLL/other SPL
BS1534-1: SBA (HOA3)             Dolby           Qualcomm
BS1534-2: Multi-channel 7.1+4    Fraunhofer IIS  Ittiam
BS1534-3: Objects                Fraunhofer IIS  Nokia
BS1534-4: MASA                   Dolby           Bytedance

A.4  Information relevant to all Experiments

A.4.1  General Technical Notes

Any and all deviations from the specifications contained in this document must be documented and submitted to SA4 along with the test reports.

A.4.2  General Consideration of Experiments

A.4.2.1  Difference scenario between assumed and actual end-device poses

For the evaluation of the ISAR split rendering solution, a primary focus should be testing with relevant difference scenarios between assumed and actual end-device poses. To cover the relevant cases, the head-tracker trajectory files used should be taken from the following categories:
  • Static within range: +-20 degrees
  • Dynamic within range: +-20 degrees
    • Sinusoidal: 0.25 Hz
    • Triangular: 0.5 Hz
  • Real, i.e., derived from real head tracker trajectories with movements giving rise to substantial differences (>15 degrees) between assumed and actual end-device poses and exposing the tested methods to a sufficient degree.
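The two synthetic categories can be illustrated with a short sketch generating yaw-angle samples. This is not the normative trajectory format (real files must follow the TS 26.258 convention), and the 50 Hz update rate is an assumption for illustration only.

```python
import math

def sinusoidal_yaw(duration_s, amp_deg=20.0, freq_hz=0.25, rate_hz=50.0):
    """Sinusoidal yaw trajectory within +-amp_deg degrees."""
    n = int(duration_s * rate_hz)
    return [amp_deg * math.sin(2 * math.pi * freq_hz * i / rate_hz)
            for i in range(n)]

def triangular_yaw(duration_s, amp_deg=20.0, freq_hz=0.5, rate_hz=50.0):
    """Triangular yaw trajectory within +-amp_deg degrees."""
    n = int(duration_s * rate_hz)
    out = []
    for i in range(n):
        phase = (i / rate_hz) * freq_hz % 1.0  # position within one period
        tri = 4 * abs(phase - 0.5) - 1         # triangle wave in [-1, 1]
        out.append(amp_deg * tri)
    return out
```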

A.4.2.2  DOF

Another aspect of interest is the ability to deal with differences of assumed and actual end-device poses around different axes, i.e., the number of degrees of freedom (1-3) which the candidate solutions can cope with. Accordingly, the head-tracker trajectory files shall cover the following DOF scenarios:
  • 1-DOF with pose deviations in yaw
  • 2-DOF with pose deviations in yaw and pitch
  • 3-DOF with pose deviations in yaw, pitch and roll

A.4.2.3  Rendering simulation

Two different rendering simulation methodologies shall be covered in the tests.
  • Trajectory nullification
    This simulation methodology is based on the concept that an immersive audio scene is pre-rotated prior to IVAS encoding while the head-tracked rendering compensates for the pre-rotation. In the ideal case and under certain conditions, this compensation can be perfect. This simulation methodology exposes the ability of a split rendering system to compensate for the pre-rotation despite the differences between assumed and actual end-device poses.
  • Unguided end-device pose
    This simulation methodology is based on rendering a decoded immersive audio scene according to a given head-tracker trajectory. This simulation methodology exposes the ability of a split rendering system to follow the actual head-tracker trajectory despite only the divergent assumed head-tracker trajectory being available at the pre-renderer.
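The trajectory-nullification concept can be reduced to a minimal numerical sketch for a single sound-source direction using yaw-only rotations (an illustration, not the actual rendering chain): the pre-renderer rotates the scene by the assumed pose, and the end-device renderer counter-rotates by the actual pose. With identical poses the source direction is restored exactly; a pose difference leaves a residual rotation.

```python
import math

def rotate_yaw(vec, deg):
    """Rotate a 3-D direction vector by deg degrees around the vertical axis."""
    r = math.radians(deg)
    x, y, z = vec
    return (x * math.cos(r) - y * math.sin(r),
            x * math.sin(r) + y * math.cos(r),
            z)

def split_render_direction(source, assumed_yaw_deg, actual_yaw_deg):
    """Pre-rotate by the assumed pose, then compensate with the actual pose."""
    pre_rotated = rotate_yaw(source, -assumed_yaw_deg)   # pre-renderer stage
    return rotate_yaw(pre_rotated, actual_yaw_deg)       # end-device stage
```

When assumed and actual yaw coincide, the two stages cancel; when they differ by 15 degrees, the rendered direction is off by exactly that residual yaw.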

A.4.2.4  Input formats

According to the ISAR requirements, the tests shall cover the following IVAS codec input formats:
  • SBA (HOA3)
  • MASA (2 TCs)
  • Multi-channel (7.1.4)
  • Objects (ISM-4)

A.4.3  Methodology

BS.1534 test methodologies shall be used in the ISAR selection tests. High-level configuration of the experiments is outlined below.
  • Number of items per experiment: 12
  • 10 experienced listeners
  • Total number of conditions: 4
  • Number of anchor conditions: 2
    • Native reference system
    • 7 kHz low-pass anchor

A.4.4  Head-tracker trajectories

A.4.4.1  General

All head-tracker trajectory files shall follow the convention imposed by the IVAS source code specification TS 26.258, i.e., shall be usable by the IVAS decoder/renderer.

A.4.4.2  Head-tracker trajectory categories and DOF

Head-tracker trajectories shall meet the above-defined category and DOF specifications.

A.4.4.3  Head-tracker trajectory availability and selection

Head-tracker trajectories of the above-defined categories and DOF will be publicly collected from volunteering organizations. After collection and checking suitability, a list of available trajectories will be generated. The collection and checking will be done jointly by the involved organizations of the ISAR selection, i.e., the SP, the LLs and the PCOs.
In a second step, these organizations will jointly select up to 6 suitable trajectories for each test. In case this number of trajectories is not available, a smaller number is selected where a given trajectory may be reused across different tests or within a test.
The involved organizations will document their trajectory selection and assignment to tests for inclusion of this information into this document.

A.4.5  Audio Material

A.4.5.1  General

All audio material shall be sampled at 48 kHz with Full Band (FB) content and formatted as 16-bit little endian WAVE format files.

A.4.5.2  Audio categories

To cover a broad range of conceivable audio categories, the test items should be taken from the categories clean speech, noisy speech, music, and critical audio. However, the ability of the system to deal with different audio categories is only a secondary focus of the selection tests, and full coverage of these categories may not be possible.

A.4.5.3  Test Item availability and selection

Audio material of the above categories has been collected as part of the IVAS codec selection phase. Details of this material are available in the IVAS test plan (see TR 26.997, Annex E). A subset of this material is either publicly available or at least available to the involved organizations of the ISAR selection, i.e., the SP, the LLs and the PCOs. These organizations will in a first step create a list of commonly available test items.
In a second step, these organizations will jointly select up to 12 suitable original test items for each test. In case this number of test items is not available, a smaller number is selected where a given test item may be reused across different tests or even within a test if the applied head-tracker trajectories, DOF or simulation methodology is different.
The involved organizations will document their test item selection and assignment to tests for inclusion of this information into this document.

A.4.5.4  Training material

No dedicated training material will be made available for use in a potential training phase in which the subjects may familiarize themselves with the testing methodology and environment.
Such a training phase is voluntary and under the responsibility of the involved LLs. No items from the main tests shall be used for training. A training phase shall be executed as a separate short BS.1534 session.

A.4.6  Listening Systems and Listening Environments

The ISAR Selection Test will use the following listening systems:
  • High-quality stereo headphones for binaural listening, e.g.:
    • Sennheiser HD 650

Change history

