Paper Information
B. Zoph and Q. V. Le. Neural architecture search with reinforcement learning. In Proceedings of the International Conference on Learning Representations (ICLR), 2017
Introduction
The architecture of a deep network is usually treated as another set of hyperparameters to be tuned by hand, which is both time-consuming and labor-intensive. This paper instead proposes a way to "learn" the network architecture.
J. Deng, J. Guo, and S. Zafeiriou, “ArcFace: Additive Angular Margin Loss for Deep Face Recognition”, arXiv.
Applying deep learning to face recognition requires not only a good model; how the loss is computed also matters a great deal.
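The additive angular margin can be sketched in NumPy: normalize the features and class weights so the logits become cosines of angles, then add a margin m to the target class's angle before scaling. The helper name `arcface_logits` and the defaults `s=64.0`, `m=0.5` are illustrative choices here, not the authors' code.

```python
import numpy as np

def arcface_logits(embeddings, weights, labels, s=64.0, m=0.5):
    """Sketch of the additive angular margin (hypothetical helper, for illustration).
    embeddings: (N, d) features, weights: (C, d) class weights, labels: (N,) class ids."""
    # L2-normalize features and class weights so each logit is cos(theta_j)
    e = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    w = weights / np.linalg.norm(weights, axis=1, keepdims=True)
    cos = e @ w.T                                   # (N, C)
    theta = np.arccos(np.clip(cos, -1.0, 1.0))
    # add the angular margin m only at each sample's target class
    target = np.zeros_like(cos, dtype=bool)
    target[np.arange(len(labels)), labels] = True
    logits = np.where(target, np.cos(theta + m), cos)
    return s * logits                               # scaled logits for softmax cross-entropy
```

Because the margin is added to the angle rather than the cosine, the target-class logit is penalized more strongly when the feature is close to the decision boundary, which is what yields the tighter angular clusters the paper reports.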
Completely Unsupervised Phoneme Recognition by Adversarially Learning Mapping Relationships from Audio Embeddings
Authors: Da-Rong Liu, Kuan-Yu Chen, Hung-Yi Lee, Lin-shan Lee
Laurens van der Maaten and Geoffrey Hinton. Visualizing Data using t-SNE. Journal of Machine Learning Research, 2008.
In machine learning we often want to extract as few, yet as informative, features as possible from the data, because unnecessary features lead to overfitting. PCA can identify the features worth keeping and thereby achieves dimensionality reduction. Although PCA works reasonably well, it is a linear method. Another common dimensionality-reduction approach is SNE, whose visualizations are better than PCA's, but it suffers from the crowding problem in the low-dimensional space.
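For reference, the linear baseline the note contrasts t-SNE against can be written in a few lines: center the data and project it onto the top-k right singular vectors. The `pca` helper is a hypothetical sketch for illustration, not code from the paper.

```python
import numpy as np

def pca(X, k):
    """Minimal PCA sketch: project centered data onto the top-k principal axes."""
    Xc = X - X.mean(axis=0)                      # center each feature
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:k].T                         # (n, k) low-dimensional embedding

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 10))
Z = pca(X, 2)                                    # 2-D embedding, as used for visualization
```

Because the projection is a single linear map, PCA cannot unfold curved manifolds; that limitation is exactly what motivates nonlinear methods such as SNE and t-SNE.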
Li Wan et al. “A Hybrid Neural Network-Latent Topic Model”, JMLR 2012
Probabilistic graphical models and neural networks are both popular approaches in machine learning. This paper attempts to combine a hierarchical topic model with a neural network.
Mairal, Julien, et al. “Online dictionary learning for sparse coding.” Proceedings of the 26th Annual International Conference on Machine Learning. ACM, 2009.
Sparse coding represents high-dimensional data as a combination of a small number of basic elements (a basis), somewhat like feature extraction. The dictionary D sought in the paper is similar to the decoder part of an auto-encoder; the difference from D is that an auto-encoder…
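The sparse-coding step itself can be sketched under the assumption of a fixed dictionary D: ISTA minimizes ½‖x − Da‖² + λ‖a‖₁ over the code a. The `ista` helper below is illustrative only; the paper's actual contribution, the online update of D, is omitted here.

```python
import numpy as np

def ista(x, D, lam=0.1, steps=200):
    """Sketch of sparse coding by ISTA for a FIXED dictionary D (illustration only)."""
    L = np.linalg.norm(D, 2) ** 2                # Lipschitz constant of the smooth part
    a = np.zeros(D.shape[1])
    for _ in range(steps):
        g = D.T @ (D @ a - x)                    # gradient of 0.5*||x - D a||^2
        z = a - g / L                            # gradient step
        a = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft-threshold -> sparsity
    return a
```

The soft-thresholding step is what drives most coefficients to exactly zero, so the data point is reconstructed from only a few dictionary atoms, matching the "sum of a few basis elements" view above.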
Veit et al. Learning From Noisy Large-Scale Datasets With Minimal Supervision. CVPR 2017.
Sometimes data is easy to obtain, but the corresponding labels are not. Since the vast majority of human learning is unsupervised, unsupervised learning should be feasible in principle; how to realize it concretely remains a hard problem.
Paper Information
Gong, Yunchao, and Svetlana Lazebnik. “Iterative quantization: A procrustean approach to learning binary codes.” CVPR 2011.
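ITQ's alternation can be sketched as follows, assuming the data V has already been centered and PCA-projected: fix the rotation R and quantize B = sign(VR), then fix B and solve the orthogonal Procrustes problem for R via an SVD. The helper `itq` and its parameter names are ours, not the authors' code.

```python
import numpy as np

def itq(V, iters=20):
    """Sketch of the ITQ alternation on centered, PCA-projected data V of shape (n, c).
    Illustrative helper, not the authors' implementation."""
    rng = np.random.default_rng(0)
    # random orthogonal initialization of the rotation R
    R, _ = np.linalg.qr(rng.normal(size=(V.shape[1], V.shape[1])))
    for _ in range(iters):
        B = np.sign(V @ R)                       # binary codes under the current rotation
        # orthogonal Procrustes: R = argmin_R ||B - V R||_F with R orthogonal
        U, _, Wt = np.linalg.svd(V.T @ B)
        R = U @ Wt
    return np.sign(V @ R), R
```

Rotating the PCA projection before taking signs is what reduces the quantization loss: the rotation aligns the data with the vertices of the binary hypercube without changing the variance captured by the projection.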