Artificial Minds - Spring 2009
Class notes for Monday, March 9 (week 8)

===============================================================================
DISCLAIMER: These are my own working lecture notes, which I'm posting here in
case they're useful. I'll try to post my notes on a more or less regular basis
(assuming that I have some to post), but I make no guarantees as to their
completeness or comprehensibility. They're also not too pretty, unless you're
a fan of plain text and ASCII art, so I'd recommend using them as a supplement
rather than a replacement for your own note-taking. In other words, you're
still responsible for the material we cover in class, whether or not that
material is included here.
===============================================================================

Biological Neurons

- you have around 100 billion neurons in your brain
- each neuron is connected to roughly 1,000-10,000 others
- hundreds of trillions of connections in all
- connection strengths change with experience

Draw picture of a neuron
- cell body
- dendrites
- synapses
- axon

Brains are good at:
- pattern matching
- face recognition
- speech recognition
- language understanding
- language generation
- analogical thinking

Brain-style computation
- parallel, not serial
  - minimax search is an example of serial processing
  - von Neumann architecture vs. connectionist architecture
  - the 100-step limit
- graceful degradation
  - can handle noisy, incomplete, or inconsistent information
- adaptable (learning)

Artificial neurons
- inputs (binary or continuous)
- output (binary or continuous)
- weights
- net input: weighted sum of the inputs (each input times its weight)
- activation function: threshold
- activation function: sigmoid
- activation function: linear

    output:       [ ]
               threshold
                 /   \
    weights: w1 /     \ w2
               /       \
             [ ]       [ ]
    inputs:   x1        x2

    net input = x1 * w1 + x2 * w2

Example:

    weights:        w1 = 0.8, w2 = 0.4
    input pattern:  x1 = 0.7, x2 = 0.9
    threshold = 0.5

    net input = 0.7 * 0.8 + 0.9 * 0.4 = 0.56 + 0.36 = 0.92

    0.92 > threshold, so output of neuron = 1 in response to input pattern

(Sketch 1 at the end of these notes turns this computation into code.)

McCulloch and Pitts (1943): With suitable choices for threshold and weight
values, networks of artificial neurons can be made to simulate any logical
function.

Example:

                          output
    input1   input2   |   OR   AND
    ------------------------------
      0        0      |    0    0
      0        1      |    1    0
      1        0      |    1    0
      1        1      |    1    1

    logical AND:  threshold = 0.5, weights = 0.3, 0.3
    logical OR:   threshold = 0.1, weights = 0.2, 0.4

(Sketch 2 at the end of these notes checks these settings against the truth
table.)

How to set the weights?
- perceptron learning rule
- error = target - output
- w(t+1) = w(t) + learning_rate * input * error

Example:
- learning_rate = 0.2
- random initial weights = -0.2, 0.4
- threshold = 0.1
- training data (OR):
    0 0 -> 0
    0 1 -> 1
    1 0 -> 1
    1 1 -> 1

1. present 0 0 -> output 0 (correct)
2. present 0 1 -> output 1 (correct)
3. present 1 0 -> output 0 (wrong! should have produced 1)
   error = target - output = 1 - 0 = 1
4. adjust weights:
   w1 = -0.2 + (0.2 * 1 * 1) = 0
   w2 =  0.4 + (0.2 * 0 * 1) = 0.4
   (w2 is unchanged because its input was 0: only weights attached to active
   inputs get adjusted)
5. present 1 1 -> output 1 (correct)
6. repeat until no further adjustments are necessary (sketch 3 at the end of
   these notes runs this loop to convergence)
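
===============================================================================
Python sketches
===============================================================================

Sketch 1: the threshold neuron from the example above. This is my own code,
not anything we wrote in class; neuron_output is just a name I made up, but
the numbers are the ones from the worked example.

    def neuron_output(inputs, weights, threshold):
        """Return 1 if the weighted sum of the inputs exceeds the threshold,
        else 0 (threshold activation function)."""
        net_input = sum(x * w for x, w in zip(inputs, weights))
        return 1 if net_input > threshold else 0

    # Worked example: net input = 0.7 * 0.8 + 0.9 * 0.4 = 0.92 > 0.5
    print(neuron_output([0.7, 0.9], [0.8, 0.4], 0.5))   # prints 1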
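
Sketch 2: checking the AND and OR threshold/weight settings from the notes
against the truth table. This reuses neuron_output from sketch 1.

    # OR:  threshold = 0.1, weights = 0.2, 0.4
    # AND: threshold = 0.5, weights = 0.3, 0.3
    for x1 in (0, 1):
        for x2 in (0, 1):
            or_out  = neuron_output([x1, x2], [0.2, 0.4], 0.1)
            and_out = neuron_output([x1, x2], [0.3, 0.3], 0.5)
            print(x1, x2, "->", "OR:", or_out, "AND:", and_out)

The output reproduces the OR and AND columns of the truth table above.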
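
Sketch 3: a rough sketch of the perceptron learning rule applied to the OR
training data. Again, the code and names are my own; the update inside the
loop is the rule w(t+1) = w(t) + learning_rate * input * error from the
notes. The threshold stays fixed; only the weights are adjusted.

    def train_perceptron(data, weights, threshold, learning_rate):
        """Cycle through the training data, adjusting the weights after each
        error, until a full pass produces no adjustments."""
        weights = list(weights)
        while True:
            adjusted = False
            for inputs, target in data:
                net_input = sum(x * w for x, w in zip(inputs, weights))
                output = 1 if net_input > threshold else 0
                error = target - output
                if error != 0:
                    # w(t+1) = w(t) + learning_rate * input * error
                    for i, x in enumerate(inputs):
                        weights[i] += learning_rate * x * error
                    adjusted = True
            if not adjusted:
                return weights

    or_data = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 1)]
    print(train_perceptron(or_data, [-0.2, 0.4], 0.1, 0.2))   # [0.2, 0.4]

Starting from the weights in the example (-0.2, 0.4), this makes the same
adjustment as step 4 above on its first pass, bumps w1 once more on the
second pass, and settles at w1 = 0.2, w2 = 0.4, which classifies all four
OR patterns correctly.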