@wizardforcel How can you be so confident that ML - DL is not bleeding-edge research, given that you only know /something/ about clustering and classification? SVM alone is a large topic; I don't think a training course (培训班) will teach you anything about the representer theorem in feature space. I'm also wondering whether you know anything about reinforcement learning, which is another major area of ML.
I do admit that some elementary techniques in ML are understood by many ordinary programmers, just as we all know how to compute +-*/ in math. But saying "ML - DL is not bleeding-edge research" is a bit like saying "Math - algebraic geometry is not bleeding-edge research" :-)
The reason I mention optimization so frequently is that nearly all ML problems are essentially optimization problems. Since you already know SVM, I'll use it as an example :) The soft-margin SVM, or equivalently the slack-variable SVM, is the same optimization problem as minimizing the hinge loss with an L_2-norm regularization term. The state-of-the-art way to solve it is a popular optimization algorithm, stochastic coordinate descent.
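To make that equivalence concrete, here is a minimal sketch of mine (not anyone's reference solver): the soft-margin linear SVM objective written directly as L_2-regularized hinge loss, minimized with a plain batch subgradient method just to keep the code short. The toy data, the variable names (X, y, lam, w), and the step size are illustrative assumptions; real solvers use stochastic/dual coordinate descent as mentioned above.

```python
# Sketch: soft-margin linear SVM as L_2-regularized hinge loss.
# Solved with a simple batch subgradient method for readability,
# NOT with the coordinate-descent solvers used in practice.
import numpy as np

rng = np.random.default_rng(0)

# Toy 2-D data with labels in {-1, +1}.
X = np.vstack([rng.normal(+1.0, 1.0, size=(50, 2)),
               rng.normal(-1.0, 1.0, size=(50, 2))])
y = np.array([+1.0] * 50 + [-1.0] * 50)

lam = 0.1          # L_2 regularization strength
w = np.zeros(2)    # weight vector (bias omitted to keep the sketch short)

def objective(w):
    """(lam/2)*||w||^2 + mean_i max(0, 1 - y_i <w, x_i>) -- the hinge-loss form."""
    margins = y * (X @ w)
    return 0.5 * lam * (w @ w) + np.mean(np.maximum(0.0, 1.0 - margins))

for _ in range(500):
    margins = y * (X @ w)
    violated = margins < 1.0   # examples inside the margin (non-zero hinge loss)
    # Subgradient of the objective: lam*w minus the average of y_i*x_i
    # over the margin-violating examples.
    grad = lam * w - (y[violated][:, None] * X[violated]).sum(axis=0) / len(y)
    w -= 0.1 * grad

print("objective:", objective(w))
print("training accuracy:", np.mean(np.sign(X @ w) == y))
```

The connection to the usual C-parameterized soft-margin formulation (minimize 1/2*||w||^2 + C*sum_i xi_i subject to the margin constraints) is that, at the optimum, each slack variable equals the hinge loss of its example, so dividing through by C*n gives the regularized form above with lam roughly 1/(C*n).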
BTW, don't think too highly of yourself just because you've learned some ML in a year :-) Within 6 months of starting ML, I had published a spotlight paper at NIPS. I'm still not one of the best in the field, and we both need to keep learning new stuff to get better = D