All posts (41)
[Paper Review] GAN Vocoder: Multi-Resolution Discriminator Is All You Need (INTERSPEECH21)
Title: GAN Vocoder: Multi-Resolution Discriminator Is All You Need
Authors: Jaeseong You, Dalhyun Kim, Gyuhyeon Nam, Geumbyeol Hwang, Gyeongsu Chae
Affiliation: MoneyBrain Inc
Venue: INTERSPEECH 2021
Paper: https://arxiv.org/abs/2103.05236
Audio samples: https://deepbrainai-research.github.io/gan-vocoder/
- Why are recent GAN-based vocoders working so well?
- Could it be because they use a multi-resolution discriminator?
- Builds various generators and runs experiments..
[Paper Review] UnivNet: A Neural Vocoder with Multi-Resolution Spectrogram Discriminators for High-Fidelity Waveform Generation (INTERSPEECH21)
Title: UnivNet: A Neural Vocoder with Multi-Resolution Spectrogram Discriminators for High-Fidelity Waveform Generation
Authors: Won Jang, Dan Lim, Jaesam Yoon, Bongwan Kim, Juntae Kim
Affiliation: Kakao Enterprise Corporation
Venue: INTERSPEECH 2021
Paper: https://arxiv.org/abs/2106.07889
Audio samples: https://kallavinka8045.github.io/is2021/
- Previous work often avoided full-band spectral features because of the over-smoothing problem. But..
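Multi-resolution spectrogram discriminators take as input magnitude spectrograms computed with several different STFT settings, so that both fine temporal detail and fine frequency detail are visible to some discriminator. A minimal sketch of producing those multi-resolution inputs; the `(n_fft, hop)` values below are illustrative, not necessarily UnivNet's exact configuration:

```python
import numpy as np

def stft_magnitude(x, n_fft, hop):
    """Magnitude spectrogram via a simple Hann-windowed STFT."""
    window = np.hanning(n_fft)
    frames = []
    for start in range(0, len(x) - n_fft + 1, hop):
        frames.append(np.abs(np.fft.rfft(x[start:start + n_fft] * window)))
    return np.stack(frames)  # shape: (num_frames, n_fft // 2 + 1)

# One spectrogram per resolution; each would feed its own sub-discriminator.
# These (n_fft, hop) pairs are assumed for illustration.
resolutions = [(512, 128), (1024, 256), (2048, 512)]
x = np.random.randn(16000).astype(np.float32)  # 1 s of dummy audio at 16 kHz
specs = [stft_magnitude(x, n_fft, hop) for n_fft, hop in resolutions]
for (n_fft, hop), s in zip(resolutions, specs):
    print(n_fft, hop, s.shape)
```

Smaller windows trade frequency resolution for time resolution, which is why the set of resolutions, rather than any single one, matters.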
[Paper Review] HiFi-GAN: Generative Adversarial Networks for Efficient and High Fidelity Speech Synthesis (NeurIPS20)
Title: HiFi-GAN: Generative Adversarial Networks for Efficient and High Fidelity Speech Synthesis
Authors: Jungil Kong, Jaehyeon Kim, Jaekyoung Bae
Affiliation: Kakao Enterprise
Venue: NeurIPS 2020
Paper: https://arxiv.org/abs/2010.05646
Code: https://github.com/jik876/hifi-gan
Audio samples: https://jik876.github.io/hifi-gan-demo/
- A technique that converts mel-spectrograms into high-quality speech using a GAN.
- The key was properly modeling sinusoidal signals, the basic building blocks of audio.
- Doing so, human ..
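To capture that sinusoidal (periodic) structure, HiFi-GAN's multi-period discriminator folds the 1-D waveform into 2-D arrays by small prime periods, so that samples one period apart line up in the same column. A minimal sketch of that folding, using zero padding for simplicity where an actual implementation may pad differently:

```python
import numpy as np

def reshape_by_period(x, period):
    """Pad a 1-D waveform so its length divides `period`, then fold it
    into shape (len(x) // period, period): samples `period` steps apart
    end up in the same column, exposing periodic structure."""
    pad = (-len(x)) % period
    x = np.pad(x, (0, pad))
    return x.reshape(-1, period)

# HiFi-GAN uses a set of prime periods (e.g. 2, 3, 5, 7, 11), one
# sub-discriminator per reshaped view.
x = np.arange(10, dtype=np.float32)
folded = reshape_by_period(x, 3)
print(folded.shape)  # (4, 3) after padding 10 samples up to 12
```

Prime periods are chosen so the views overlap as little as possible, letting each sub-discriminator specialize in a different periodic pattern.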
[Paper Review] High Fidelity Speech Synthesis with Adversarial Networks (ICLR20)
Title: High Fidelity Speech Synthesis with Adversarial Networks
Authors: Mikołaj Binkowski, Jeff Donahue, Sander Dieleman, Aidan Clark, Erich Elsen, Norman Casagrande, Luis C. Cobo, Karen Simonyan
Affiliation: Imperial College London, DeepMind
Venue: ICLR 2020
Paper: https://arxiv.org/abs/1909.11646
Code: https://github.com/mbinkowski/DeepSpeechDistances (Frechet DeepSpeech Distance)
- GAN-TTS: as the name says, a GAN-based TTS (Text-..
[Paper Review] MelGAN: Generative Adversarial Networks for Conditional Waveform Synthesis (NeurIPS19)
Title: MelGAN: Generative Adversarial Networks for Conditional Waveform Synthesis
Authors: Kundan Kumar, Rithesh Kumar, Thibault de Boissiere, Lucas Gestin, Wei Zhen Teoh, Jose Sotelo, Alexandre de Brebisson, Yoshua Bengio, Aaron Courville
Affiliation: Lyrebird AI, Mila, University of Montreal
Venue: NeurIPS 2019
Paper: https://arxiv.org/abs/1910.06711
Code: https://github.com/descriptinc/melgan-neurips
Audio samples: https:/..
[Paper Review] GANSynth: Adversarial Neural Audio Synthesis (ICLR19)
Title: GANSynth: Adversarial Neural Audio Synthesis
Authors: Jesse Engel, Kumar Krishna Agrawal, Shuo Chen, Ishaan Gulrajani, Chris Donahue, Adam Roberts
Affiliation: Google AI
Venue: ICLR 2019
Paper: https://arxiv.org/abs/1902.08710
Code: https://github.com/magenta/magenta/tree/main/magenta/models/gansynth
Audio samples: https://storage.googleapis.com/magentadata/papers/gansynth/index.html
- Let's synthesize high-quality audio with a GAN. WaveNe..
[Paper Review] Adversarial Audio Synthesis (ICLR19)
Title: Adversarial Audio Synthesis
Authors: Chris Donahue, Julian McAuley, Miller Puckette
Affiliation: UC San Diego
Venue: ICLR 2019
Paper: https://arxiv.org/abs/1802.04208
Code: https://github.com/chrisdonahue/wavegan
Sound samples: https://chrisdonahue.com/wavegan_examples/
- Why aren't GANs used for audio generation? By 2018-19, when this paper appeared, several years had passed since GANs were introduced and much progress had already been made. Let's try generating waveform audio with a GAN.
- Proposes two models, WaveGAN and SpecGAN. The name..
[Paper Review] Diff-TTS: A Denoising Diffusion Model for Text-to-Speech (INTERSPEECH21)
Title: Diff-TTS: A Denoising Diffusion Model for Text-to-Speech
Authors: Myeonghun Jeong, Hyeongju Kim, Sung Jun Cheon, Byoung Jin Choi, Nam Soo Kim
Affiliation: Seoul National University, Neosapience
Venue: INTERSPEECH 2021
Paper: https://arxiv.org/abs/2104.01409
Webpage: https://jmhxxi.github.io/Diff-TTS-demo/index.html
- Recently, audio generation methods based on diffusion models [Chen21][Kong21] have been introduced. But those papers only went as far as proposing generation conditioned on something like digit labels..
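A denoising diffusion model such as Diff-TTS is trained against a fixed forward process that gradually corrupts clean data with Gaussian noise; the network then learns to reverse it. A minimal sketch of that forward step, using a simple linear beta schedule whose values are illustrative, not the paper's exact configuration:

```python
import numpy as np

# Forward diffusion in closed form:
#   x_t = sqrt(alpha_bar_t) * x_0 + sqrt(1 - alpha_bar_t) * eps,  eps ~ N(0, I)
# Linear beta schedule (assumed values, for illustration only).
T = 1000
betas = np.linspace(1e-4, 0.02, T)
alpha_bars = np.cumprod(1.0 - betas)  # cumulative signal-retention factor

def diffuse(x0, t, rng):
    """Sample x_t directly from x_0 at noise step t."""
    eps = rng.standard_normal(x0.shape)
    return np.sqrt(alpha_bars[t]) * x0 + np.sqrt(1.0 - alpha_bars[t]) * eps

rng = np.random.default_rng(0)
x0 = np.sin(np.linspace(0, 8 * np.pi, 256))  # toy "waveform"
x_noisy = diffuse(x0, T - 1, rng)
print(x_noisy.shape)  # (256,); alpha_bars[-1] is near 0, so x_T is ~pure noise
```

Because `alpha_bars[t]` shrinks toward zero as `t` grows, the final step is nearly pure Gaussian noise, which is what the reverse (denoising) model starts from at synthesis time.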