Principia Discordia

Principia Discordia => Techmology and Scientism => Topic started by: P3nT4gR4m on June 30, 2016, 06:31:41 pm

Title: Blaise Aguera doing a pretty good 101 in neural nets
Post by: P3nT4gR4m on June 30, 2016, 06:31:41 pm
http://www.ted.com/talks/blaise_aguera_y_arcas_how_computers_are_learning_to_be_creative#t-539223

So I watched a couple of YouTube clips a while back that explained what learning algorithms are up to, well enough for my brain to take in. But for anyone who's still trying to wrap their heads around it, the first half of this TED talk does a pretty good job of summing it up.
Title: Re: Blaise Aguera doing a pretty good 101 in neural nets
Post by: LuciferX on July 03, 2016, 08:29:38 pm
Quote
http://www.ted.com/talks/blaise_aguera_y_arcas_how_computers_are_learning_to_be_creative#t-539223

So I watched a couple of YouTube clips a while back that explained what learning algorithms are up to, well enough for my brain to take in. But for anyone who's still trying to wrap their heads around it, the first half of this TED talk does a pretty good job of summing it up.

Yea.  I liked this one.

W · X = y
W · X - y = 0

Very simple, though not always entirely obvious.
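Those two lines are the whole game: a layer multiplies inputs X by weights W to produce y, and training nudges W until the residual W · X - y hits zero. A toy sketch in pure Python - the data, learning rate, and iteration count here are made-up illustration values, not anything from the talk:

```python
# Fit W so that W . X ~= y for a few example pairs, by gradient descent.
# Targets were generated with the "true" weights [1, 2].
X = [[1.0, 2.0], [2.0, 1.0], [3.0, 3.0]]   # inputs
y = [5.0, 4.0, 9.0]                         # targets: 1*x0 + 2*x1
W = [0.0, 0.0]                              # weights, start at zero
lr = 0.05                                   # learning rate

for _ in range(500):
    for xi, yi in zip(X, y):
        pred = W[0] * xi[0] + W[1] * xi[1]  # W . X
        err = pred - yi                     # the residual W . X - y
        W[0] -= lr * err * xi[0]            # nudge W to shrink the residual
        W[1] -= lr * err * xi[1]

print([round(w, 2) for w in W])             # -> [1.0, 2.0]
```

Every neural-net layer is doing a (much bigger, nonlinear) version of that update loop.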
Title: Re: Blaise Aguera doing a pretty good 101 in neural nets
Post by: Brother Mythos on July 06, 2016, 11:52:20 am
Quote
http://www.ted.com/talks/blaise_aguera_y_arcas_how_computers_are_learning_to_be_creative#t-539223

So I watched a couple of YouTube clips a while back that explained what learning algorithms are up to, well enough for my brain to take in. But for anyone who's still trying to wrap their heads around it, the first half of this TED talk does a pretty good job of summing it up.

As I’m starting out with zero (0) knowledge of artificial neural networks, I think it’s going to take just a little bit more to get me up to 101 level.

Nevertheless, I found the video clip interesting enough to want to learn more about the subject. So, I’m starting with the ‘Artificial neural network’ article on ‘Wikipedia’, and I’ll see where that leads me.
Title: Re: Blaise Aguera doing a pretty good 101 in neural nets
Post by: P3nT4gR4m on July 06, 2016, 06:16:29 pm
These 3 vids are the ones that I found the most useful in helping me grasp the concept. I was coming at it from a programmer's point of view - "How do I code it?" Turns out that was the wrong question. You don't code it; it just works.

CNNs are only part of the picture, but they're a good starting point. Now that I can grok them, a lot of the other stuff makes more sense.

https://youtu.be/l42lr8AlrHk

https://youtu.be/C_zFhWdM4ic

https://www.youtube.com/watch?v=py5byOOHZM8
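For what it's worth, the "filter" idea at the heart of those CNN clips fits in a few lines: slide a small grid of weights over an image and sum the products at each position. The 3x3 kernel below is a hand-picked vertical-edge detector, purely for illustration - in a real CNN those nine numbers are exactly what training learns:

```python
# Slide a 3x3 filter over a tiny "image": the core operation of a CNN layer.
# Left half dark (0), right half bright (9), so there's one vertical edge.
image = [
    [0, 0, 0, 9, 9, 9],
    [0, 0, 0, 9, 9, 9],
    [0, 0, 0, 9, 9, 9],
    [0, 0, 0, 9, 9, 9],
]
kernel = [          # hand-picked vertical-edge detector;
    [-1, 0, 1],     # a CNN would learn these values instead
    [-1, 0, 1],
    [-1, 0, 1],
]

def convolve(img, k):
    """Valid (no-padding) 2D convolution with a 3x3 kernel."""
    out = []
    for r in range(len(img) - 2):
        row = []
        for c in range(len(img[0]) - 2):
            s = sum(img[r + i][c + j] * k[i][j]
                    for i in range(3) for j in range(3))
            row.append(s)
        out.append(row)
    return out

result = convolve(image, kernel)
print(result)   # strong responses (27) only where the dark/bright edge is
```

A trained CNN stacks hundreds of these, with the kernel values arrived at by the same kind of error-nudging as any other weight.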
Title: Re: Blaise Aguera doing a pretty good 101 in neural nets
Post by: Brother Mythos on July 07, 2016, 02:12:04 am
Quote
These 3 vids are the ones that I found the most useful in helping me grasp the concept. I was coming at it from a programmer's point of view - "How do I code it?" Turns out that was the wrong question. You don't code it; it just works.

CNNs are only part of the picture, but they're a good starting point. Now that I can grok them, a lot of the other stuff makes more sense.

https://youtu.be/l42lr8AlrHk

https://youtu.be/C_zFhWdM4ic

https://www.youtube.com/watch?v=py5byOOHZM8

Thanks for the additional links. I’ll check them out and see if I can wrap my head around this subject.
Title: Re: Blaise Aguera doing a pretty good 101 in neural nets
Post by: P3nT4gR4m on July 07, 2016, 09:01:59 am
Reminds me a bit of when I learned OOP - months of people talking complete gobbledygook and then suddenly I just saw it. :lulz:
Title: Re: Blaise Aguera doing a pretty good 101 in neural nets
Post by: Mesozoic Mister Nigel on July 07, 2016, 09:21:19 pm
Slightly different topic, but related, and I thought this might be of interest to you: https://www.coursera.org/learn/compneuro
Title: Re: Blaise Aguera doing a pretty good 101 in neural nets
Post by: P3nT4gR4m on July 08, 2016, 08:06:31 am
Cheers, Nigel. I keep hearing about computational neuroscience but I get the impression it's more biology than computer science?

Machine learning and related AI stuff seems to pick up any interesting mathematical results neuroscience spits out and figure out if they can be used in ML models. The keyword is "inspired by". I'm just about at the point now where I'm ready to start arsing about with these models - in hacker terminology, a "hello world": compile a program that prints your name on the screen. The science part I'm happy to leave to the scientists ;)

First I need to pick an API from the seven billion available. :eek:
Title: Re: Blaise Aguera doing a pretty good 101 in neural nets
Post by: Mesozoic Mister Nigel on July 09, 2016, 08:09:05 pm
Quote
Cheers, Nigel. I keep hearing about computational neuroscience but I get the impression it's more biology than computer science?

Machine learning and related AI stuff seems to pick up any interesting mathematical results neuroscience spits out and figure out if they can be used in ML models. The keyword is "inspired by". I'm just about at the point now where I'm ready to start arsing about with these models - in hacker terminology, a "hello world": compile a program that prints your name on the screen. The science part I'm happy to leave to the scientists ;)

First I need to pick an API from the seven billion available. :eek:

I personally don't consider it biology, but more a crossover between biology and data systems; basically applied systems science. It is essentially neural modeling using computers, to the degree that current computer technology is capable of mimicking biological neural networks. The reason it might be interesting to you or to some of the other folks here is because if you have a baseline understanding of how computational neuroscience works, it can help give you a bigger-picture understanding of what the programmer's version of neural networks is meant to do. You don't have to have a biology background to learn computational neuroscience fundamentals; a lot of people actually come to it via computer science or systems science. I am into wet lab work so it's not so much my bag, but it seems like it could be interesting and/or useful to more tech-minded, computer-oriented people.
Title: Re: Blaise Aguera doing a pretty good 101 in neural nets
Post by: Mesozoic Mister Nigel on July 09, 2016, 08:18:40 pm
The MIT course page gives this relatively concise description:

Quote
This course gives a mathematical introduction to neural coding and dynamics. Topics include convolution, correlation, linear systems, game theory, signal detection theory, probability theory, information theory, and reinforcement learning. Applications to neural coding, focusing on the visual system are covered, as well as Hodgkin-Huxley and other related models of neural excitability, stochastic models of ion channels, cable theory, and models of synaptic transmission.
Title: Re: Blaise Aguera doing a pretty good 101 in neural nets
Post by: P3nT4gR4m on July 09, 2016, 10:51:18 pm
Yeah, cool. It's the thing I thought it was. A lot of the AI gurus have backgrounds in "neuroscience", so I'm guessing the computational flavour would be like a bridge. I look across it and I see icky, gooey stuff waiting to pounce on me, and I'll place a bet a biology person would look across to this side and see weird mechanical stuff made of robots and numbers.

I heard about some results where a simulated visual cortex matched the real meat version in some measurable way. That slicing thing you were doing? But I know those meat models take heavy fucking duty compute power, a long way off from any sort of production use case. The coding equivalent is more a high-level logic function: what you lack in fidelity, you gain in footprint and scalability.

Intelligence itself is beginning to be clearly understood and described in math. What we're discovering is that it can be improved, in machines. Massively. Not personality or intellect - those are fucked-up weird shit that nobody seems able to work out yet - but raw intelligence. They're calling it "artificial" but really it's more a case of "optimised" or "amplified".

I'm doing what I've been doing since I was a kid - looking for problems and applying computational solutions, only this time I don't need to program it. This will be like cheating for a living. :lulz:
Title: Re: Blaise Aguera doing a pretty good 101 in neural nets
Post by: Mesozoic Mister Nigel on July 09, 2016, 11:54:09 pm
As a biology person with a pretty solid math and systems science background, mostly what I see are experimental models that seek to approximate the information networks present at all levels of biology: cell signaling and so on. I find it interesting that nobody seems to be trying to mimic signaling cascades yet, but maybe they are and just haven't published.

As a neuroscientist I have to disagree with the perception that anyone is doing mathematical modeling of cognitive intelligence, yet; intelligence as an economist defines it, yes, but economists are worlds away from actual cognition. I suppose it all comes down to how we define intelligence; from a neuroscience perspective, it is typically defined as a combination of learning, memory, and cognition, rather than as simple logic pathways.
Title: Re: Blaise Aguera doing a pretty good 101 in neural nets
Post by: P3nT4gR4m on July 10, 2016, 12:23:48 am
One of those cross-discipline, same-terminology deals? Intelligence in the AI vernacular is like a black-box version of software, but much more powerful. We understand why it works, but the actual state - in old money, the program logic - is something that just happens. It learns. It learns to perform intelligent operations on input data. It does stuff that would take millions of coders millions of years to hand-code, if it could be hand-coded at all.

Considering the impact the olde fashioned, hand-coded shit just had on the planet, I'm fast approaching certainty that no one ain't seen nothing yet.
Title: Re: Blaise Aguera doing a pretty good 101 in neural nets
Post by: Freeky on July 10, 2016, 01:34:23 am
Quote
As a neuroscientist I have to disagree with the perception that anyone is doing mathematical modeling of cognitive intelligence, yet; intelligence as an economist defines it, yes, but economists are worlds away from actual cognition.
:lol:
I'm putting this in my sig.
Title: Re: Blaise Aguera doing a pretty good 101 in neural nets
Post by: Mesozoic Mister Nigel on July 10, 2016, 01:44:21 am
Quote
One of those cross-discipline, same-terminology deals? Intelligence in the AI vernacular is like a black-box version of software, but much more powerful. We understand why it works, but the actual state - in old money, the program logic - is something that just happens. It learns. It learns to perform intelligent operations on input data. It does stuff that would take millions of coders millions of years to hand-code, if it could be hand-coded at all.

Considering the impact the olde fashioned, hand-coded shit just had on the planet, I'm fast approaching certainty that no one ain't seen nothing yet.

Yeah, I know, dude. I don't talk about it much, but I have a CIS background, and we covered AI and naive intelligence ad nauseam in upper-division neuroscience classes.
Title: Re: Blaise Aguera doing a pretty good 101 in neural nets
Post by: LuciferX on July 10, 2016, 02:40:46 am
Quote
One of those cross-discipline, same-terminology deals? Intelligence in the AI vernacular is like a black-box version of software, but much more powerful. We understand why it works, but the actual state - in old money, the program logic - is something that just happens. It learns. It learns to perform intelligent operations on input data. It does stuff that would take millions of coders millions of years to hand-code, if it could be hand-coded at all.

Considering the impact the olde fashioned, hand-coded shit just had on the planet, I'm fast approaching certainty that no one ain't seen nothing yet.

Enjoying all the links here. What I got is: before, in ancient procedural terms, you might build some object-detection code as a composite of various hand-picked, relevant feature detectors; now, you "train" a model to learn, for itself, which detector-like filters best fit input data to output. During training, some models learn by back-propagation: the algorithm does a backward pass, and starting from the correct answer or output (y), it changes the weights of the filters (W) to match the given input (X). I think, though it still beats me.
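That forward/backward loop can be sketched end-to-end. Below is a minimal hand-rolled net - two inputs, a small sigmoid hidden layer, one sigmoid output - trained on XOR by exactly that process. The layer size, learning rate, and epoch count are arbitrary illustration choices:

```python
import math
import random

random.seed(0)

def sig(z):
    return 1.0 / (1.0 + math.exp(-z))

# XOR: a classic task a single layer can't learn but a hidden layer can.
data = [([0.0, 0.0], 0.0), ([0.0, 1.0], 1.0),
        ([1.0, 0.0], 1.0), ([1.0, 1.0], 0.0)]

H = 4                                               # hidden units
W1 = [[random.uniform(-1, 1) for _ in range(3)]     # per unit: w0, w1, bias
      for _ in range(H)]
W2 = [random.uniform(-1, 1) for _ in range(H + 1)]  # output weights + bias
lr = 0.5

def forward(x):
    h = [sig(w[0] * x[0] + w[1] * x[1] + w[2]) for w in W1]
    out = sig(sum(W2[i] * h[i] for i in range(H)) + W2[H])
    return h, out

def mse():
    return sum((forward(x)[1] - y) ** 2 for x, y in data) / len(data)

before = mse()
for _ in range(5000):
    for x, y in data:
        h, out = forward(x)
        # Backward pass: start from the known answer y and push the
        # error back through the layers, adjusting every weight a little.
        d_out = (out - y) * out * (1 - out)
        for i in range(H):
            d_h = d_out * W2[i] * h[i] * (1 - h[i])
            W1[i][0] -= lr * d_h * x[0]
            W1[i][1] -= lr * d_h * x[1]
            W1[i][2] -= lr * d_h
            W2[i] -= lr * d_out * h[i]
        W2[H] -= lr * d_out
after = mse()

print(before, "->", after)   # error shrinks as the net learns XOR
```

And the point about not reading the finished logic holds here too: nothing in that loop encodes "XOR". The rule emerges in the weights, and reading it back out of W1 and W2 is already non-trivial.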
Title: Re: Blaise Aguera doing a pretty good 101 in neural nets
Post by: P3nT4gR4m on July 10, 2016, 03:12:08 am
I'd say that pretty much sums up what's going on with back-propagation as I understand it. The hard part for me, as a coder, was letting go and not needing to be able to read the finished logic. It was a kind of jarring transition, coming from a background of having to micromanage every fucking step in the machine's "thought process". :lulz:
Title: Re: Blaise Aguera doing a pretty good 101 in neural nets
Post by: Brother Mythos on July 14, 2016, 12:07:52 am
Quote
These 3 vids are the ones that I found the most useful in helping me grasp the concept. I was coming at it from a programmer's point of view - "How do I code it?" Turns out that was the wrong question. You don't code it; it just works.

CNNs are only part of the picture, but they're a good starting point. Now that I can grok them, a lot of the other stuff makes more sense.

https://youtu.be/l42lr8AlrHk

https://youtu.be/C_zFhWdM4ic

https://www.youtube.com/watch?v=py5byOOHZM8

I reviewed those YouTube clips and they did help me gain more of an understanding of ANNs.

I also reviewed the following Wikipedia article on ANNs: https://en.wikipedia.org/wiki/Artificial_neural_network

In the ‘External Links’ section of the above article they list the following link: http://www.dkriesel.com/en/science/neural_networks

On that site you can download the book, A Brief Introduction to Neural Networks by David Kriesel, in PDF form. I downloaded the book, read a number of chapters in depth, and skimmed through the rest. Overall, I found the book very helpful.

That Wikipedia article on ANNs also led me to their article on Machine learning: https://en.wikipedia.org/wiki/Machine_learning

This article was also helpful, and the ‘Software’ section lists ‘Free and open-source software’. Seeing that, I went over to SourceForge and searched ‘artificial neural network’. To my surprise, I got 751 program hits!

Here’s my SourceForge link: https://sourceforge.net/directory/os:windows/?q=artificial%20neural%20network

So, on the surface, it appears that I just might be able to treat an ANN software package like an electrical engineer’s ‘black box’, and make it work for me as long as I can figure out how to program the inputs and outputs.

Now I have to decide if I want to try to take my newfound, but superficial, knowledge to the next level. Just for the hell of it, I did some 'if/then' AI programming for an open-source game a few years ago. So, I do have some programming skills, and the right ANN software package just might be adaptable to that particular open-source game, or another one I'm familiar with. Still, as I only get the urge to write code on a 'once in a blue moon' basis, I have to go through a 'relearning curve' every time I do it.

Anyway, I now know a little bit about Artificial Neural Networks. So, thanks for your original and your follow-up posts.