Open Speech and Language Resources

Deeply parent-child vocal interaction dataset

Identifier: SLR98

Summary: The interaction of pairs of parent and child (reading fairy tales, singing children's songs, conversing, and more). Recorded in 3 types of places, at 3 distinct distances, with 2 types of smartphone.

Category: Speech

License: Attribution-NonCommercial-NoDerivatives 4.0 International (CC BY-NC-ND 4.0)

Downloads (use a mirror closer to you):
Parent-ChildVocalInteraction.tar.gz [254M]   ( Parent-child vocal interaction dataset )   Mirrors: [US]   [EU]   [CN]  

About this resource:

  • Recording environment: Studio apartment (moderate reverb), Dance studio (high reverb), Anechoic chamber (no reverb)
  • Device: iPhone X, Samsung Galaxy S7
  • Recording distance from the source: 0.4m, 2.0m, 4.0m
  • Volume (full set): ~16 (~282) hours, ~20,000 (~360,000) utterances, ~2 (~110) GB
  • Format: 16kHz, 16-bit, mono
  • Language: Korean
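After downloading, a recording can be checked against the stated format (16kHz, 16-bit, mono) with the Python standard library alone. This is a minimal sketch; the file name below is hypothetical, following the naming pattern shown in the metadata example further down this page.

```python
import wave

# Hypothetical file name, following the dataset's naming pattern.
WAV_PATH = "sub3004_2020_11_29_01_35_0_0_0.wav"

def check_format(path):
    """Verify a recording is 16 kHz, 16-bit, mono; return its duration in seconds."""
    with wave.open(path, "rb") as w:
        assert w.getframerate() == 16000, "expected 16 kHz sample rate"
        assert w.getsampwidth() == 2, "expected 16-bit samples"
        assert w.getnchannels() == 1, "expected mono audio"
        return w.getnframes() / w.getframerate()
```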

The interactions of parent-child pairs, such as reading fairy tales, singing children's songs, and conversing, are recorded.
The recordings took place in 3 types of places with different levels of reverberation: an anechoic chamber, a studio apartment, and a dance studio.
To examine the effect of the microphone's distance from the source and of the recording device, every session is recorded at 3 distinct distances with 2 types of smartphone, an iPhone X and a Galaxy S7.

The dataset is a subset (approximately 1%) of a much bigger dataset that was recorded in the same environments as this public dataset.
This sample dataset contains a single pair of speakers recorded in the 3 places, but at only a single distance.
Please visit our website Deeply Inc. or GitHub, or contact us, for more details and for access to the full dataset under a commercial license.

Deeply makes products that anyone can use with audio AI technology, and aims to make people's lives happier with those products. For more products and services, please visit Deeply Inc.



                {'sub30040a00000': {'wavfile': 'sub3004_2020_11_29_01_35_0_0_0.wav',
                                    'label': 2,
                                    'subjectID': 'sub3004',
                                    'speaker': 'a',
                                    'age': 39,
                                    'sex': 0,
                                    'noise': 0,
                                    'location': 0,
                                    'distance': 0,
                                    'device': 0,
                                    'rms': 0.005859313067048788,
                                    'length': 1.521},
Label     : {speaker a(parent): {0: singing, 1: reading, 2: other utterances},  
             speaker b(child) : {0: singing, 1: reading, 2: crying, 3: refusing, 4: other utterances}}  
Subject ID: Unique 'sub + 4-digit' key allocated to each subject group  
Speaker   : Unique key allocated to each individual in the subject group.  
Sex       : {0: Female, 1: Male}  
Noise     : {0: Noiseless, 1: Indoor noise, 2: Outdoor noise, 3: Both indoor/outdoor noise}  
Location  : {0: Studio apartment, 1: Dance studio, 2: Anechoic chamber}  
Distance  : {0: 0.4m, 1: 2.0m, 2: 4.0m}
Device    : {0: iPhone X, 1: Galaxy S7}  
Rms       : Root mean square value of the signal  
Length    : Length of the signal in seconds  
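The integer codes above can be mapped back to readable values in a few lines. The sketch below is a hypothetical decoder for a single-speaker entry (polyphonic entries, described in the note below, need the per-character handling explained there); the code tables are copied from this page, and the example entry is the one shown above.

```python
# Code tables copied from the field descriptions on this page.
LABELS = {
    "a": {0: "singing", 1: "reading", 2: "other utterances"},   # parent
    "b": {0: "singing", 1: "reading", 2: "crying",
          3: "refusing", 4: "other utterances"},                # child
}
SEX = {0: "Female", 1: "Male"}
NOISE = {0: "Noiseless", 1: "Indoor noise", 2: "Outdoor noise",
         3: "Both indoor/outdoor noise"}
LOCATION = {0: "Studio apartment", 1: "Dance studio", 2: "Anechoic chamber"}
DISTANCE = {0: "0.4m", 1: "2.0m", 2: "4.0m"}
DEVICE = {0: "iPhone X", 1: "Galaxy S7"}

def decode(entry):
    """Translate the integer codes of a single-speaker metadata entry."""
    return {
        "wavfile": entry["wavfile"],
        "speaker": entry["speaker"],
        "label": LABELS[entry["speaker"]][entry["label"]],
        "sex": SEX[entry["sex"]],
        "noise": NOISE[entry["noise"]],
        "location": LOCATION[entry["location"]],
        "distance": DISTANCE[entry["distance"]],
        "device": DEVICE[entry["device"]],
        "length_sec": entry["length"],
    }

entry = {"wavfile": "sub3004_2020_11_29_01_35_0_0_0.wav", "label": 2,
         "subjectID": "sub3004", "speaker": "a", "age": 39, "sex": 0,
         "noise": 0, "location": 0, "distance": 0, "device": 0,
         "rms": 0.005859313067048788, "length": 1.521}
# decode(entry)["label"] -> "other utterances" (speaker a, label 2)
```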

* In polyphonic utterances, fields such as label, speaker, and sex are longer than usual, because the information of both speaker a and speaker b is written in the same field.
  For example, if speaker a (parent, male, 35 y.o.) sings and speaker b (child, female, 3 y.o.) tries to talk in a single audio file, *speaker* would be 'ab', *sex* would be '10' (male (speaker a), female (speaker b)), and *label* would be '04' (singing (speaker a), other utterances (speaker b)).
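The per-speaker convention above can be unpacked mechanically: one character per speaker, in speaker order. The helper below is a hypothetical sketch, assuming the multi-speaker codes are stored as strings as in the example.

```python
# Code tables copied from the field descriptions on this page.
PARENT_LABELS = {0: "singing", 1: "reading", 2: "other utterances"}
CHILD_LABELS = {0: "singing", 1: "reading", 2: "crying",
                3: "refusing", 4: "other utterances"}
SEX = {0: "Female", 1: "Male"}

def split_polyphonic(entry):
    """Split per-speaker label and sex codes out of a (possibly polyphonic) entry."""
    tables = {"a": PARENT_LABELS, "b": CHILD_LABELS}
    labels, sexes = str(entry["label"]), str(entry["sex"])
    out = {}
    for i, spk in enumerate(entry["speaker"]):
        out[spk] = {"label": tables[spk][int(labels[i])],
                    "sex": SEX[int(sexes[i])]}
    return out

# The example from the note: parent sings ('0'), child makes other utterances ('4').
example = {"speaker": "ab", "label": "04", "sex": "10"}
# split_polyphonic(example) ->
#   {'a': {'label': 'singing', 'sex': 'Male'},
#    'b': {'label': 'other utterances', 'sex': 'Female'}}
```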

You can cite the data as follows (the entry key is a placeholder):

@misc{deeply_parent_child_vocal_interaction,
  title={{Deeply parent-child vocal interaction dataset}},
  author={Deeply Inc.},
}

Tel: (+82) 70-7459-0704