
News
Program
Origin
Current goals
Possible cooperators
Current research mates
Operation
Introduction to machine learning and evolutionary computation
Current research directions
Current research projects
Contact us
Historical bulletin
Classical control toolbox
Support vector machine toolbox
Practical R Programming
Practical Python Programming
Lead-lag compensator design toolbox based on step response using particle swarm optimization
Lead-lag compensator design toolbox based on step response using cuckoo search
Lead-lag compensator design toolbox based on Vector Margin using particle swarm optimization
Lead-lag compensator design toolbox based on Vector Margin using cuckoo search
Pathways to Machine Learning and Soft Computing
Practical Deep Learning

Introduction to Machine Learning and Evolutionary Computation:

  Machine learning and evolutionary computation can be viewed as an ingenious combination of computational intelligence and applied statistics. Their applications span a wide range of fields, including engineering, management, and medicine. Scholars have not reached a consensus on the definition of this field; my own view of it can be depicted by the figure below, in which the large circle contains the various learning machines, the area outside the circle holds the various (mathematical) tools that may be needed, and the large box lists some possible application domains.

Figure: Machine Learning & Evolutionary Computation

  The main purpose of machine learning is to build a predictive model (i.e., a learning machine) from a given set of measured or observed data. When new input data arrive, the trained predictive model can predict the corresponding output values. This problem is essentially the same as the one faced by classical statistical regression theory. What differs markedly is that today's popular learning machines are predictive models built chiefly on neural-network-style architectures, such as the Artificial Neural Network (ANN), the Generalized Radial Basis Function Network (GRBFN), the Fuzzy Neural Network (FNN), the Support Vector Classifier (SVC), and the Support Vector Regressor (SVR). Most of these learning machines are nonparametric regression models; that is, we do not impose strong assumptions on the functional form of the final predictive model, so these learning machines can handle a much wider range of data types. In my view, however, the connection between machine learning and statistical regression theory is still not tight enough, and this is exactly where we can make substantial contributions.
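To make the idea of a learning machine for classification concrete, here is a minimal pure-Python sketch of the perceptron (the first such model, listed in the history table below). The toy data set, learning rate, and epoch count are illustrative assumptions, not values taken from our materials:

```python
# Minimal perceptron for binary classification (labels +1 / -1).
# Toy data, learning rate, and epoch budget are illustrative choices.

def train_perceptron(samples, labels, lr=0.1, epochs=100):
    """Learn weights w and bias b so that sign(w.x + b) matches the labels."""
    n = len(samples[0])
    w = [0.0] * n
    b = 0.0
    for _ in range(epochs):
        errors = 0
        for x, y in zip(samples, labels):
            activation = sum(wi * xi for wi, xi in zip(w, x)) + b
            pred = 1 if activation >= 0 else -1
            if pred != y:                      # misclassified: apply update rule
                w = [wi + lr * y * xi for wi, xi in zip(w, x)]
                b += lr * y
                errors += 1
        if errors == 0:                        # converged on separable data
            break
    return w, b

def predict(w, b, x):
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b >= 0 else -1

# Toy linearly separable data (logical AND pattern).
X = [(0, 0), (0, 1), (1, 0), (1, 1)]
y = [-1, -1, -1, 1]
w, b = train_perceptron(X, y)
```

Because this toy set is linearly separable, Novikoff's convergence theorem (see the table below) guarantees that the update rule terminates with a separating hyperplane.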

  Evolutionary computation (EC) refers mainly to optimization algorithms developed by mimicking evolution and biological behavior. These methods are general-purpose population-based optimization algorithms and can be applied to many difficult optimization problems. Many scholars consider evolutionary computation to comprise Genetic Programming (GP), Evolution Strategy (ES), Evolutionary Programming (EP), the Genetic Algorithm (GA), and Particle Swarm Optimization (PSO). Quite a few scholars also include Simulated Annealing (SA) and Ant Colony Optimization (ACO). Naturally, these algorithms can greatly help us solve many difficult machine learning problems.
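As a concrete instance of a population-based optimizer, the following is a minimal pure-Python PSO sketch minimizing the two-dimensional sphere function. The swarm size, inertia weight, and acceleration coefficients are common textbook choices, assumed here for illustration rather than taken from our projects:

```python
import random

def sphere(pos):
    """Sphere function f(x) = sum of squared coordinates; minimum 0 at the origin."""
    return sum(c * c for c in pos)

def pso(f, dim=2, n_particles=20, iters=200,
        w=0.7, c1=1.5, c2=1.5, bound=5.0, seed=0):
    rng = random.Random(seed)
    # Initialize positions uniformly in [-bound, bound]; velocities start at zero.
    pos = [[rng.uniform(-bound, bound) for _ in range(dim)]
           for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                     # personal best positions
    pbest_val = [f(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]    # global best so far
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                # Velocity update: inertia + cognitive pull + social pull.
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            val = f(pos[i])
            if val < pbest_val[i]:                  # improve personal best
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:                 # improve global best
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

best, best_val = pso(sphere)
```

The same loop works for any black-box objective `f`, which is what makes PSO a general-purpose optimizer: no gradient of the objective is ever required.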

A brief history of machine learning and evolutionary computation is given in the table below:

Brief history of machine learning and evolutionary computation

Year(s)   | Name(s)                             | Event description
1936      | Fisher                              | Discriminant analysis
1943      | McCulloch, Pitts                    | First mathematical model for the artificial neuron
1958      | Rosenblatt                          | First model of learning machine (perceptron) for classification; true beginning of the mathematical analysis of learning processes
1958      | Friedberg                           | Genetic Programming (GP)
1960      | Widrow, Hoff                        | Adaptive linear neuron (Adaline) for regression using the delta learning rule
1962      | Novikoff                            | First (convergence) theorem about the perceptron
1962      | Holland                             | Genetic Algorithm (GA)
1963      | Tikhonov                            | Regularization method for solutions of ill-posed problems
1965      | Zadeh                               | Fuzzy mathematics
1965      | Rechenberg, Schwefel                | Evolution Strategy (ES)
1966      | Fogel, Owens, Walsh                 | Evolutionary Programming (EP)
1969      | Minsky, Papert                      | Simple biologically motivated learning systems (perceptrons) are incapable of learning an arbitrarily complex problem (negative result)
1971      | Vapnik, Chervonenkis                | Statistical learning theory
1982      | Hopfield                            | Hopfield network
1982      | Vapnik                              | Introduction of regularization theory into machine learning
1986      | Rumelhart, Hinton, Williams, Le Cun | Error back-propagation algorithm (generalized delta learning rule) for multi-layer neural networks (direct generalization of perceptrons)
1988      | Chua, Yang                          | Cellular Neural Network (CNN)
1989      | Poggio, Girosi                      | Radial Basis Function Network (RBFN)
1989-1991 | Goldberg, Davis                     | Popularization of the genetic algorithm
1991      | Koza                                | Improvement of genetic programming
1992      | Vapnik                              | Support Vector Machine (SVM)
1995      | Kennedy, Eberhart                   | Particle Swarm Optimization (PSO)




Machine Learning and Evolutionary Computation Research Group, I-Shou University
Designed by cutemark 2009.1