
[Bug] Training xtts v2 with original dataset which is multilingual and multispeaker #3699

Open
OswaldoBornemann opened this issue Apr 18, 2024 · 8 comments

OswaldoBornemann commented Apr 18, 2024

Can we train XTTS v2 with an original dataset that is multilingual and multi-speaker?

OswaldoBornemann added the bug (Something isn't working) label Apr 18, 2024

OswaldoBornemann commented Apr 19, 2024

@erogol So I tried to train XTTS v2 with multiple speakers in Chinese. The evaluation loss seems abnormal.

[Screenshot: evaluation loss curve]

Thomcle commented May 1, 2024

I am also trying to train the XTTS GPT model as a beginner. The documentation suggests that we can only train the model to clone a single voice. My question is: can we train XTTS on a multilingual and multi-speaker dataset? I would like to improve the general model quality in 3 different languages (Spanish, Italian, and German).

I know this isn't the best place to ask this question, but I know that you encountered the same problem.

OswaldoBornemann commented

@Thomcle So for now, I don't think XTTS v2 supports this mechanism of training with multiple speakers by setting the speaker name to the audio name. I have tried it, but the inference performance is not stable.

smallsudarshan commented

We are also trying to do this. I don't see why this should not be possible in theory if the dataset quality is good. I think what is important is that the model sees a mixture of languages during training, i.e. one minibatch in language A, then one in language B, and so on.

I think the solution would look something like changing this:

config_dataset = BaseDatasetConfig(
    formatter="ljspeech",
    dataset_name="ljspeech",
    path="/raid/datasets/LJSpeech-1.1_24khz/",
    meta_file_train="/raid/datasets/LJSpeech-1.1_24khz/metadata.csv",
    language="en",
)

to this:

config_dataset = BaseDatasetConfig(
    formatter="ljspeech",
    dataset_name="ljspeech",
    path="/raid/datasets/LJSpeech-1.1_24khz/",
    meta_file_train="/raid/datasets/LJSpeech-1.1_24khz/metadata.csv",
    language="auto",
)

and when language="auto", there would be a way to detect which language each sample is in while loading the dataset. I think there are many libraries that do this.
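
For example, with the langid package (one such library; this snippet is only an illustration, not existing TTS code), per-sample detection is a one-liner:

import langid

# langid.classify returns a (language_code, score) tuple
detected_language, _score = langid.classify("this is an english sentence")  # -> "en"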

Some additional logic might be needed if we want to make sure each minibatch has only one language. Although I am not sure how important that is; we should train and find out.
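
A minimal sketch of that grouping logic, assuming each sample dict already carries a "language" key (the class name and the wiring into the trainer's DataLoader are hypothetical, not existing TTS code):

import random
from collections import defaultdict

class LanguageGroupedBatchSampler:
    """Yields batches of dataset indices where each batch is monolingual."""

    def __init__(self, samples, batch_size):
        self.batch_size = batch_size
        # group sample indices by the "language" key set while loading
        self.by_language = defaultdict(list)
        for idx, sample in enumerate(samples):
            self.by_language[sample["language"]].append(idx)

    def __iter__(self):
        batches = []
        for indices in self.by_language.values():
            random.shuffle(indices)
            for i in range(0, len(indices), self.batch_size):
                batches.append(indices[i : i + self.batch_size])
        # shuffle whole batches so languages are interleaved across steps
        random.shuffle(batches)
        return iter(batches)

    def __len__(self):
        return sum(
            (len(v) + self.batch_size - 1) // self.batch_size
            for v in self.by_language.values()
        )

Something like this could then be passed as batch_sampler to the PyTorch DataLoader instead of batch_size/shuffle.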

Once the language is recognized and converted to tokens, the rest of the process is the same and should need no change.

smallsudarshan commented May 2, 2024

One more solution that we are trying (training will start and loss will reduce without errors):

Go to this file:
/TTS/TTS/tts/datasets/__init__.py

# needs `import langid` (pip install langid) added at the top of the file
def add_extra_keys(metadata, language, dataset_name):
    for item in metadata:
        # detect the language per sample instead of using the `language`
        # argument that comes from the dataset config
        detected = langid.classify(item["text"])[0]
        # my dataset only contains English and Hindi, so map anything
        # that is not English to Hindi
        item["language"] = detected if detected == "en" else "hi"
        # add unique audio name
        relfilepath = os.path.splitext(os.path.relpath(item["audio_file"], item["root_path"]))[0]
        audio_unique_name = f"{dataset_name}#{relfilepath}"
        item["audio_unique_name"] = audio_unique_name
    return metadata

Modify the function to something like this, so that item['language'] is set by a language detection model or some custom logic instead of by the parameter you pass during training.

Training is currently running; I will share the results here regardless of whether they are good or bad. The loss seems to have reduced significantly though.

OswaldoBornemann commented May 2, 2024

@smallsudarshan So have you changed the ljspeech formatter as well? Because the ljspeech formatter sets the speaker name for all audios to "ljspeech" by default, which is not correct for the multi-speaker scenario. What do you think?
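
(For reference, a custom formatter could derive the speaker name per file instead of hardcoding it. The sketch below assumes a metadata file with "file_id|text" rows where the speaker id is the file-name prefix; the function name and that layout are assumptions, not part of the existing code.)

import os

def multispeaker_formatter(root_path, meta_file, **kwargs):
    """Hypothetical formatter: takes the speaker id from the file-name
    prefix, e.g. "spk01_0001" -> speaker "spk01"."""
    items = []
    with open(os.path.join(root_path, meta_file), "r", encoding="utf-8") as f:
        for line in f:
            cols = line.strip().split("|")
            wav_file = os.path.join(root_path, "wavs", cols[0] + ".wav")
            items.append(
                {
                    "text": cols[1],
                    "audio_file": wav_file,
                    "speaker_name": cols[0].split("_")[0],
                    "root_path": root_path,
                }
            )
    return items

If I read the code right, a callable like this can be passed as the formatter argument to load_tts_samples, or added to TTS/tts/datasets/formatters.py and referenced by name in BaseDatasetConfig.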

smallsudarshan commented May 2, 2024

Hey @OswaldoBornemann, I have not, actually. I was going through the code and I think the speaker name is not used anywhere for training. If you think it is, please let me know.

It is used in the split_dataset function in TTS/tts/datasets/__init__.py, however, so you might get slightly better eval metrics if you handle it there.

But my dataset is well balanced across speakers at the moment, so I have not added this.

smallsudarshan commented May 2, 2024

After 5 epochs on around 12 thousand samples of varying lengths from 2 speakers (I did not pre-process much, i.e. no shaping the distribution of text lengths, accents, etc.):

Train loss:
[image: training loss curve]
Eval loss:
[image: evaluation loss curve]

Here are a few samples in English and Hindi. It seems to do a decent job given the data. For example, my Hindi audios have a very strong Assamese (a region of India) accent that the model has picked up, and the quality of the audio is very close to the data I have trained on.

In my experience:

  1. Quality of data, in terms of clarity, consistency, and punctuation labelling, matters a lot more than anything else.
  2. Larger data diversity gives more robust results across speaker references; otherwise some speaker references might be stable while others are not.

It is also able to produce audio for sentences where Hindi and English are mixed in the text, which is often the case, even though I have not explicitly trained on such sentences.
