Of camels and committees

I did a Biota podcast last night and Tom understandably asked me a little about my views on open source and collaborative development. I didn’t give a very good answer, but the subject keeps coming up lately, so I thought I’d write a post about it to try to explain my position. People want to know why I don’t plan to develop my game as open source. Why don’t I collaborate with others (often specifically the person asking the question) and hence do a far better job than I can possibly do on my own? Why am I so opposed to teamwork? Why am I so stuck up and antisocial? (Alright, nobody actually asks that, but sometimes I suspect that’s what they’re thinking.)

I’m really not opposed to collaboration. Not at all. Nor open source. It just doesn’t work well for me personally, and in particular for this application. Collaboration is the norm, so it’s not like I’m discriminating against a minority here. It’s practically compulsory in many areas. Just try getting a European Commission science grant without including at least three different countries in the team. If it weren’t for Kickstarter and you lovely generous people I’d have little hope of getting my work funded at all, and for over a decade I’ve had to fund most of it myself. But that doesn’t mean collaboration is necessarily always the best way to go about things.

In the case of my Grandroids project, writing a computer game isn’t the objective, it’s the intended outcome. These are actually very different things. For instance, the intended outcome for the Kon-Tiki expedition was to arrive at the Tuamotu Islands, but it wasn’t the objective. If it had been the objective then Thor Heyerdahl could simply have got on a plane. Any decent pan-European research collaboration could have told him that. At least after a few committee meetings to thrash out the reporting requirements.

If the game I’m writing were merely the objective then a bunch of us could sit down and discuss how we were going to achieve it. But for me it’s very much the other way round. I already have a theory that I’m trying to develop, and the game is intended to be an entertaining and useful expression of that theory. But the theory is in my head; it isn’t fully developed yet, and so I can’t delegate parts of it or even explain it properly to people. It therefore has to be a conversation between me and a computer.

And it’s not like I can even farm out the peripheral stuff. Not yet, anyway. The graphics and physics engines could be farmed out if it weren’t for the fact that they’re already written and I’ve bought the licence (in any case, without them I couldn’t do my part, so they had to come first). Even the 3D creature design is a biological issue, not predominantly an artistic one, because I’m using the physics engine and virtual muscles to control it, rather than conventional animation, so the weight distribution and anatomy have to work hand-in-hand with the muscle control system, which in turn is very co-dependent on how the brain is developed. If someone designs a beautiful creature but when I plug it into my code it keeps falling over, it’s not going to be held up by Art alone. Whereas if I develop the 3D art as well as designing the low-level postural control in the brain, my left hand can learn from my right and vice versa. These iterations occur on a minute-by-minute basis and I get a direct, personal insight into both the art and neuroscience problems that I would never have been able to take advantage of if someone else had done the graphics. This is why I’ve been building robots by myself, too. It was developing the electronics and signal processing that gave me insights and ideas into how the human brain might work, and it was neuroscience and biology that gave me new ideas about how to design the electronics and mechanics. Those intimate connections between apparently disparate ideas are the fuel for creativity. The creative act is primarily an act of analogy.

And all that has to happen inside a single brain, because in the brain ideas can connect up in myriad ways that aren’t confined to language and drawings. I don’t have any translation problems in my head; I don’t send memos to myself and then misread them; I understand every single word I say, which is rarely the case when I’m discussing things with other people. If I were a painter, this would be far more self-evident. It’s not like Michelangelo could have restricted himself to painting the faces on the Sistine Chapel ceiling while other team members chose the layout, focus-grouped the storyline, painted the arms, etc. It had to be a single creative act. Although now that I think about it, perhaps that explains the Venus de Milo…

In computing terms it’s somewhat similar to Linux. Zillions of people can maintain Linux and add to it now, but the core of it had to come out of Linus Torvalds’s head. Yet, even then, people already knew what an operating system was and roughly how to go about designing one. That’s far from the case in AI. We know hundreds of ways not to do it, but how to actually achieve it is still an open question. There are plenty of other, often well-funded attempts to sit round a table and figure out how to create AGI collaboratively, so if that’s the best way to go about it we’ll soon find out. But sometimes a better way to search an area is for everyone to spread out and follow their own nose. I have a specific route that I want to follow, I can’t explain it to anyone else in a way that would enable them to see exactly what I have in my mind, so it’s best for me if I just stay in my hermitage and write code. Sometimes code is the best way to explain an idea.

So, I really have nothing against collaboration or open source software per se, although if you’d asked me that yesterday morning, while I was up to my neck in CentOS, I might well have given a different answer.



36 Responses to Of camels and committees

  1. Vegard says:

    Open Source/Free Software is not about collaboration.

    It’s about letting others make a copy of your work and modify it to suit their needs. You wouldn’t even have to know about it.

    There is a world of difference between the two.

    (On an unrelated note, congratulations with the huge success of your fundraiser!!! I am really happy for you and I can’t wait to see the result.)

    Vegard

  2. Chani says:

    copyleft is only one aspect of FOSS culture (a very important one, yes, but still one). collaboration is another. so yes, free software *is* about collaboration. it wouldn’t be very efficient if everyone just forked off and worked on their own – there are good reasons forking is often considered the “nuclear option”.

    that said… there have been times when I really wished I could dig into the Docking Station source code and fix a bug (I think I knew more about that engine than its designers did by the time I left the CC). I had to give up on some neat cob ideas because of bugs in the engine, or write rather elaborate workarounds (the ability to change both the sprite and the colour of the Hand was one such hack, iirc).

    So long as Steve’s actively working on it it’s not a problem, but one day he’ll have moved on… it would be nice if he could open source it when that happens. if that’s going to be legally possible. and if he doesn’t get hit by a bus first. 🙂

  3. stevegrand says:

    Heh! I’ll do my best not to get hit by a bus, I promise, but if I leave this project and move on to something else then I’ll be sure to release the source!

    • derp says:

      Have you considered putting that in your will in case your best efforts not to get hit by a bus turn out to be insufficient?

  4. Vadim says:

    Makes sense to me, even as an OSS developer 🙂

    I find it necessary to get some stuff started before I do a release. An empty project is of no use to anyone, one that only builds on my system would only generate complaints at best, and I also usually have ideas on how the overall design should work.

    But I’ll second Chani here and say that it’d be awesome if the source got released at some point. You’re not building yet another FPS here, but something that looks like it will be very unique. It’s too complicated to be easily cloned, so I think there’d be great value in having it preserved in a form where somebody else can continue it if you stop work on it.

  5. Carl Lumma says:

    @Chani @Vegard Did you comment without reading this post or something?

    @Steve Less talk, more work.

    • stevegrand says:

      Yes sir. Working now sir. Sorry sir. 😉

    • Vegard says:

      I read the post. I read it again after I saw your comment. But I don’t understand why you would ask that. I tried to find the podcast, but it appeared to not be published yet, so I didn’t think that could be the issue. Did you reply without reading my comment or something?

      I simply wanted to clear up the apparent misunderstanding that Open Source/Free Software is necessarily about collaboration. For an example, see Jason Rohrer’s work. He produced several games completely on his own, and the source code is open.

      • stevegrand says:

        Yeah, Jason Rohrer and his family of four live on $14,500 a year. I guess that may be doable when your old age is still a very long way away! 🙂 That wouldn’t even pay my rent and basic utilities, and I’m not exactly living it up here!

        I do get what you were saying. OS and collaboration aren’t the same thing in principle, but I also agree with Chani that they usually are in practice. I was specifically talking about collaboration in this post – I just lumped it in with OS because Tom asked me about both, and when people tell me I “ought” to collaborate they usually mean under OS conditions. Maybe I should have been clearer.

        It’s a topic that gets people VERY hot under the collar, and for some reason I always end up being the bad guy for not wanting to work with other people (most often the people who get pissed at me for not working with THEM, which is weird because it’s not like I’m stopping them from collaborating with whoever they please, just me). I’ve even been accused of holding back the progress of AI because I don’t give my work away for free, like I’m some kind of public service. Most of the people who moan about it have jobs and salaries, and AI is their hobby. If it was my hobby I wouldn’t be able to do anything other than tinker with it like most of them. And my career is in tatters now in large part because a few years ago I spent a long time TRYING to collaborate with people who just let me down. I did a lot of work on several projects for nothing and in the end the projects just died.

        All I wanted to say in this post is that there are reasons why I choose to work alone, and I’m not doing it just to piss people off or because I’m antisocial! 😉

  6. Ken Albin says:

    My own view of it is that you should keep the controlling reins of the project. I have seen too many open source projects split off into so many different varied directions that the project loses its focus. Down the road, if for some reason you decide to go off in a different direction with your interests, then that would be the time to open source the project. Until then I think most people would be happy just being able to work with the project in the form of adding agents and tinkering with peripheral aspects of the creatures without directly impinging upon the core program, in a similar manner to what they did in the Creatures series.

    • stevegrand says:

      Thanks Ken. And I can do that a lot better than I did in Creatures, too, so even if the core code is closed the API will make the system as a whole pretty open. When I started writing Creatures it was for MS-DOS! Part-way through I ported it to Win 3.1 (a rare and brave thing in the games industry at the time). That allowed me to use DDE to open up the core code to external tools. I did it just for my own purposes, so that my company would be able to add new tools or amend existing ones. Then Win 95 went into Beta, so I ported my code over to that and used OLE to open the API. But these technologies were very primitive and clunky compared to today’s interoperability methods. There were no JIT compilers, no network-transparent protocols, no nothing. So this time I can do it better!

  7. Steve,

    Stick to your guns. Grandroids is your baby and although it’s polite to offer help, it would likely take you longer to explain your project to a collaborator than to finish it yourself.

    As a contributor, I too would love to see the finished product sooner rather than later, but if left with the choice of “quick and sloppy” or “slow and neat” I’d choose the latter, any day of the week (and twice on Sundays).

    Gerry

  8. Just another vote of confidence for the antisocial way of working. Some ideas need to develop inside one brain, especially if it’s a brain with years of experience. I’ve found discussing my project ideas with clever people whom I know and trust very helpful. But, at the end of the day, when someone has to iron out the devil in the details* and then code up the whole thing, there’s only one person with enough information, motivation and insanity to sit down and do it all.

    And in the case of Grandroids… seriously, Steve is already sharing a lot. If you’ve read his brainstorm posts (links to them at the bottom of this post) and if you’re a hobbyist artificial life creator, you will have found plenty to think about and tinker with. I sincerely hope that he keeps sharing his ideas with the rest of us. But, if it comes down to a choice between that and seeing the game come to life, I’d choose the game every time.

    Carl

    [* Ok, so that expression was a bit gratuitous, but I couldn’t let go of the image. ]

  9. Lyle Smith says:

    If the bumblebee had gotten sidetracked, it would still bee walking.

    I believe Grandroids will fly. Stay on track, you’re doing just fine.

  10. Jane Prophet says:

    I am all for collaboration, but a complex project, where every aspect (from droid form, colour, interior structure, etc.) impacts on the rules/behaviour of the alife forms and environment, would require a ‘deep’ collaboration – a period of trust-building between partners and much time dedicated to discussion. The result would be a rich project, but a different project than Grandroids.

    Grandroids looks to be rich, complex, multi-layered, but as a result of one vision that will be impacted by comments from a larger community, rather than being produced by that community. To do otherwise would entail numerous collaborators working almost full-time, and therefore much more money and much more risk (not just financial, but also the risk that someone would let the side down and the project would fail. Seen it over and over again).

    I have no doubt that the ‘single vision’ moderated in response to comments is the way to get this thing done, and in less than 10 years!

    Go for it, Steve, and thanks for sharing so much.

  11. Dranorter says:

    I just had a thought regarding the nature of this situation; but it’s probably all abstract nonsense, which I figured you might enjoy.

    So the question occurred to me of exactly what it is you’re doing, in the abstract. As you say, the theory isn’t precisely fully formed in your own mind, which means you aren’t simply performing the scientific act of testing a hypothesis by experiment. It’s more like you’re just pondering a question – except a computer is involved in your pondering, and when you’re done thinking you’ll have a byproduct, a neat program.

    I think the closest thing in scientific terms to what you’re doing is following a program of research. As I understand it, programs of research are not precisely scientifically justified. They are based on an intuition, or a question which is ill-defined so that it cannot be directly tested until definitions are made tighter. Surprisingly, programs of research often work out just fine as tools for scientific inquiry. What you’re doing is similar but private.

    But I wanted to think about these things in sort of a … theory of information systems. Humans think and act using at least two main information systems: thought and language; though it can certainly be subdivided somewhat, and there is an underlying but largely irrelevant genetic information system. An information system is basically a system in which successful structures propagate (like any system) and these structures can loosely be said to hold information about the world and/or patterns of behavior. Thanks to the development of an exact science of genetics, we know how one of the world’s major information systems works (clearly genes can loosely be said to contain behavior patterns). Spread of information in genetics is ruled by natural selection; success in individual minds or in language/society is based on natural selection, but one of the selective pressures is sometimes an elusive thing called truth. You’ve complained on this blog before about how weak a selective force it seems to be.

    Enough of that nonsense. The point is, the internal information system, the brain, should naturally be capable of different things than the external one, language. The point of language is that it serves to transfer successful cognitive patterns from one mind to another (I guess there are two forces that maintain language as an entity – the correspondence of ideas’ success with the success of their host organism & thus the desirability of acquiring others’ ideas, and then the ability of ideas to manipulate their hosts to perpetuate their ability to spread by perpetuating language; this second force maintains, for example, church languages like Latin). Yet language is an imperfect method. Linguistically represented ideas, especially written ones, are transferable and therefore powerful, but don’t perfectly correspond with mentally represented ideas. Much of the effort of developing science and mathematics has gone to creating a more efficient linguistic system, one in which success of ideas corresponds more closely with success of their hosts, and one in which the selective pressure known as truth is strong (different? I’m not sure.).

    I’m optimistic about individual human intelligence, attributing many evils to group-thinking, so I tend to think of science and other efforts to encourage rationality as trying to duplicate in the societal/linguistic information system some of the virtues of the individual/cognitive one. But the work is incomplete, as exemplified by your need to work alone. 😀 /rant

    If your view of cybernetics has anything to amend about my view of intelligent systems, I’d love to hear it.

    • stevegrand says:

      Wow! Somebody’s been thinking! 🙂

      I’m not sure I can add anything to that, other than maybe to muddy the waters a bit. I know people who wouldn’t distinguish between language and thought, partly because they tend to think in words. I know people who tend to think in math, and people who tend to think in still pictures. Whether they really think like that or it’s just the way it seems to them I can’t say. For my part I think visually, but not really in pictures, more actions. And I tend to “see” dynamics – feedback and stuff. Whatever that means.

      Program of research? Yes, I guess so, although that sounds a bit vague, in a “we want to look into topic X” kind of a way. What I’m doing is rather tighter than that. I do have hypotheses, but I test them initially by thought experiments, a la Einstein. It’s an interesting question the degree to which truth enters into that! Einstein imagined himself riding on a beam of light and drew conclusions from it that weren’t experimentally verified for years, but eventually they *were* verified. So what happened in Einstein’s brain that “tested” his experiments? I’m pretty sure it wasn’t logical propositions that went through his mind. I imagine he actually *saw* the consequences of riding on a beam, throwing things in trains, falling in elevators, etc. and his mental model *worked*, to the extent that it showed him surprising things.

      I’m no Einstein, of course, but I do spend most of my time performing thought experiments. I start with some speculations, like “what if the brain is like a hierarchy of servomechanisms?”, “what if it’s mapped out in various concrete and abstract coordinate frames?” Both those speculations are analogical, but they’re just metaphors. So then I start asking myself things like “what would it actually be like if you tried to make a servo out of a map?”, but even if I ask the questions in words, I don’t think about them linguistically. I try to “see” how it might work, and where it doesn’t. I might realize I need a learning rule that manipulates synapses to produce certain results, so I cast around for plausible mechanisms and insights – again it’s almost always by finding analogies. What if axons swap places with their neighbors to try to get nearer to another axon that fired recently? I can imagine that happening, so I play it through and see if it does what I need, or if not, why not. Gradually I narrow down the structures and mechanisms I need, but all of it by visually setting up imaginary machines and then watching to see how they behave. Eventually I get to the point where I know how to describe the rules in code, so I code them and see if they actually do what I expected. More often than not they don’t! Not exactly. I seem to be limited in my ability to play things out, beyond a certain complexity, so I sometimes miss something. But I can go a long way before I need to code it, and I try not to do that too soon, before I’ve thought it through a fair bit.
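      Just to make that concrete, here’s the sort of toy I might play through in my head, written out as code instead – purely illustrative, with every name and number invented for the example rather than taken from my real model:

      ```python
      # Toy run-through of the "axons swap places" thought experiment.
      # Purely illustrative; all names and numbers here are invented.
      STRIP = 16
      axons = list(range(STRIP))            # axon ids laid out along a 1D strip
      last_fired = {a: -999 for a in axons}

      def tick(t, firing, recency=3):
          # 'firing' fires now; every axon that fired recently swaps one
          # slot towards it, so axons that tend to fire together drift
          # together over time.
          last_fired[firing] = t
          for a in range(STRIP):
              if a == firing or t - last_fired[a] > recency:
                  continue
              i, target = axons.index(a), axons.index(firing)
              j = i + (1 if target > i else -1)
              axons[i], axons[j] = axons[j], axons[i]

      for t in range(500):                  # axons 3 and 12 habitually co-fire
          tick(2 * t, 3)
          tick(2 * t + 1, 12)
      print(axons)                          # 3 and 12 should end up side by side
      ```

      Playing that through shows habitually co-active axons becoming neighbors – the kind of behavior I can then compare against what I actually need before committing to the real code.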

      So it is a process of testing hypotheses, but most of the tests are virtual! I do try to put the thoughts into words, so that I can make notes or tell people what I’m thinking about, but mostly that’s as an aide memoire, because I have a lousy memory. Words are only any use when I can use them to give people analogies. I’d hate to have to try to explain in factual detail how my models work. Code seems so much better for that, because I can see for sure whether I’ve explained it correctly or not.

      None of that has much bearing on information systems, but it certainly goes along with what you say about why I work alone! Words communicate ideas, but code has something extra about it, somehow. It’s interesting how it hasn’t yet fully infiltrated the scientific lexicon. Almost every paper has equations in it (which mean nothing to me) and absolutely all of them have words, but it’s rare to find a code snippet. I wonder why?

      • Dranorter says:

        When I started working in computer science, I assumed the papers would be overflowing with code, but they’re not! My only published paper had all the code removed beforehand. 😛 And now I’m going into math instead. Mathematical language is not terribly dissimilar from executable code though…

        I definitely enjoy doing what you’re talking about, building mental machines to test ideas, though I tend to mix it with mental manipulation of mathematical symbols (usually to assure myself of some answer I get). I’ve always taken an hour or more to fall asleep, and lately I’ve taken to having a math problem to do in my head during that time- and usually it involves visual reasoning as much as mathematical, like trying to put together objects with certain symmetries. Unfortunately these problems aren’t usually relevant to my overall quest for knowledge… the problems I really want to solve seem intractable at the moment (problem being: what questions can we answer about physics just from the idea that physics is computable? Obviously there’s more to it than that, but it’s difficult to put into words! 😉 ).

        As for muddying the waters a bit, the waters really are plenty muddy since language isn’t some independent system from thought; thought directly affects language. But it still seems useful to think of it as separate because of the way ‘memes’ can specifically target language as a method of propagation. More generally I’d even like to consider separate information systems within society, like the relatively separate scientific system, religious systems, &c. It seems like they all have different dynamics; for example that of trying to hold onto traditional knowledge (without trying to test whether it’s true), which is partially based on the assumption that if we forget a given fact we won’t have another chance to find it out until it’s too late, so we’d better remember.

      • “Almost every paper has equations in it (which mean nothing to me) and absolutely all of them have words, but it’s rare to find a code snippet. I wonder why?”

        This is not quite true… at least some of us have realised the importance of releasing code with academic articles. I work in the Machine Learning field and know lots of researchers who make their code available (electronically of course) with their papers and who are of the opinion that a paper should be accepted only if the code and data are made available. Otherwise experiments are not reproducible. Also the Journal of Machine Learning Research sometimes has publications that are 95% source code and 5% article. I’m making all the source code used in my PhD thesis available online (it’s in the final stages of examination, so it will still be 2-3 weeks before I can make it visible).

        Sadly, not everyone has come around to this way of working, but the movement is certainly growing. Another positive side-effect is that it forces people to write better tested code. It’s very embarrassing when someone else checks your code, finds a bug, and invalidates all your results!

        Carl

      • Erin says:

        Oh there’s lots of reasons you don’t usually find code in papers:
        – Simple length issues. Code tends to be quite whitespace-y. This isn’t such a big deal if you’re only putting it up on arXiv or something, but if you’re trying to get into a print journal it’s a huge concern. And prior to widespread internet usage, there was simply no efficient way to transmit the code other than to have it printed with the article. Nowadays you can add a single line such as “for example implementation visit http://www.mywebpage.com” and nothing more is needed from the printing end of things.

        – Implementations are “dirty”. In the context of “getting your hands dirty” by doing something that produces a product rather than just thinking about producing a product. For some reason (mostly historical I believe), there’s a negative connotation applied to people who actually do useful things amongst the elitist thinkers.

        – Translation issues. The most common mathematical symbols are fairly universal by now. Everyone knows that the big “E” is a summation for example, even if you can’t figure out what’s being summed over. Programming languages are all over the board. LISP and C for example have almost nothing in common outside of the ASCII character space. Someone who primarily (or only!) knows C would have a hard time reading LISP and vice-versa. (Then again math is kind of like elevator music — it’s universal in the sense that everyone hates it!)

        – Sort of a basis for the previous two points. Code requires lots of annoying details. For example if you want to write a paper on some aspect of 3D graphics, your implementation will necessarily have to include all of the setup and initialization code for DX/GL, input handling, texture loading, etc. That can add up to pages and pages of code that’s completely irrelevant to your paper.

        I think things are changing, mostly due to the availability and ease of using the internet for information transferal. As noted in the first point, a simple web link can net you thousands of lines of implementation code for a single sentence in print. Places like arXiv allow more and more papers to be distributed than traditional journals ever could hope to do.

        So that solves problem 1. Problem 2 is a social issue — we’re 500+ years since the time when papers could make random unjustified claims and still be taken seriously. Everybody today has to do the grunt work (or hire a grad student to do it!). Regardless of what field you’re in, you have to do the work to prove your concept using some form of rigorous experimental process. So my point is that these implementations exist and it’s just a matter of coming to terms with the idea of distributing them (which in turn means making sure your code is clean enough to be shown to other people and so forth).

        Problems 3 and 4 are not things that can ever be solved. As long as we’ve got the ability to choose from a wide array of hardware, software, languages, etc, they will always be problems. That said, it tends to be easier to translate code into other code than it is to translate math into code, so having AN implementation is still better than none in my opinion!

  12. Mellowcow says:

    It’s perfectly reasonable you’d want to work on your project alone. What I hope though, is that people who then purchase your labor of love will be able to view and change the source code, so we can keep “Grandroids” alive and thriving beyond the dimensions of Creatures and maybe even after you yourself aren’t actively involved anymore. Every child has to move out of their parents’ basement some day, right? 😉

    • stevegrand says:

      Ha! Depends if they can afford their own rent! 🙂 If I stop being actively involved then I’ll definitely release the source, I promise, but somehow or other I have to make this work as a business. This is the only time I’ll ever be able to ask for charity like this and I’m really grateful to everyone for investing in my project and making it possible for me to carry on with my life’s work, but that’s exactly what I need to be able to do – carry on. I’ve put years of work into this already, so it’s not like I can take the Kickstarter money, work incredibly hard to write hundreds of thousands of lines of completely unique AI code and then just give it away so that anyone can do what they like with it. Where do I go after that? I have no pension, no savings, no property and I’m not a kid any more, so it’s important. So what I want to do is open the thing up as much as possible to you enthusiasts, while still retaining the opportunity to sell enough copies of the core product to the rest of the world to be able to carry on with this ridiculous, stupid vocation of mine.

      I do understand the concern. Actually, I think people are underestimating how powerful the API will be and overestimating what they’d be able to do with the raw source code if they had it, but we’ll see. I really don’t know how to deal with all this yet. I’m just relieved I’m not going to go bankrupt in a few weeks after all. Let me write the damn thing first and then we’ll figure out how to please everyone without me having to step off the edge of the Grand Canyon in despair! I’m not going to let it die like Creatures – that’s part of the reason I’m doing it this way and not signing another pact with the devil / talking to venture capitalists. I want to continue making cool things and inventing new kinds of artificial intelligence for as long as my brain holds out. In the meantime, if anyone really wants to play around with source code that I’ve written, rather than write their own, then be my guest – there’s half a game up on SourceForge already. I decided to drop it because it wasn’t going to work commercially, amongst other more personal reasons, so Tom Barbalet persuaded me to make it open source, but I don’t think anyone’s even touched it. I can’t afford ANY time to support its development, so you’d be completely on your own (it’s well commented), but it’s a couple of years’ worth of work just sitting there rusting. http://sourceforge.net/projects/simergy/

      • Vadim says:

        I actually tried to give Simergy a try. But it’s all full of DirectX, and there’s no support for that in Mono on Linux, and apparently it’s not planned at all either. And I don’t have any Windows machines left. So the attempt ended at that.

        But since Grandroids is probably going to be something along the same lines, I’ll have to figure something out eventually.

      • stevegrand says:

        Grandroids isn’t at all the same, Vadim.

        Simergy is written in DirectX partly because I’m a Windows programmer, like the majority of commercial applications programmers, partly because the concept for the game required low-level access to the 3D pipeline in order to create things like detachable frame hierarchies, and partly because all the 3D engines out there at the time were a pile of crap – unfinished, buggy and badly documented.

        But Grandroids is based on Unity3D, which in turn is based on OpenGL and Mono. It’s an excellent, stable commercial engine with good documentation and support, and it does what I need it to do for this game, which is slightly (although not much) more traditional in computer science terms than Simergy was. So the 3D code in Grandroids is a piece of cake, relatively speaking, which is just as well, because the AI is horrendously complex instead.

        But I don’t really understand why you say you’ll have to figure any of this out. You don’t need to care what the 3D pipeline runs on, as long as Unity ports their engine to Linux, which I understand they’re now working on doing. I have a Linux machine here, so I’ll co-develop it on Linux as soon as a Unity runtime is available, and I’ll try it on Wine shortly too.

      • Vadim says:

        Good to know 🙂 And awesome to hear that Unity3D is working on Linux support.

        The “figure it out” thing was about my personal setup, in case Wine, Mono and VMs all fail to work.

        I ended up specializing in Linux system administration and application development, so I can quite seriously say that the last Windows version I really used was Win2K. I did use XP for a while but in a very minimal manner (to ssh into Linux machines, basically), and haven’t used anything newer for more than maybe 15 minutes total.

        Over the years I came up with a development setup I really like. I currently use both a desktop and a laptop for development (often at once), and both of them have a RAID + LVM + full disk encryption setup that would require complete reformatting to get Windows on there at all. It’d take me at least a weekend to do that, and it’d almost necessarily make some things I currently like doing inconvenient.

        So I’m at the point where, if Wine doesn’t work, Mono doesn’t work, and a VM doesn’t either, I need to make a serious time investment to get an application to run at all, and after that, just having to boot Windows is going to be inconvenient.

        Sure, I can and will do it if really needed, but it’s a rather involved thing that will require time and thinking how to set everything up.

      • stevegrand says:

        Ah, I see. Yes, all my machines have been Windows until recently, but it’s a lot cheaper to move from Win to Linux than it is to go the other way! I’ll keep my fingers crossed that a native Linux version of Unity comes out within the year – someone pointed me to a post where they let it slip that they’re working on a “preview” version right now. There’s a long time between now and when my game will be finished, so there’s a good chance it’ll work out. Porting the whole project to Linux ought to be just a matter of clicking on a different BUILD button, like it is already to flip between Windows and Mac.

      • Vadim says:

        Very good to hear that about the preview version 🙂

        Say, are you planning an Android version? Unity3D seems to support it.

      • stevegrand says:

        Android, too? Heh! Gimme a chance!!!!! 🙂

        I’d like to do some neat things with iPhone next, possibly. It’d be fantastic to carry one of your creatures around and let it see the real world through your phone camera, feel every bump and tilt through the accelerometers, etc. I don’t know anything about the Android platform because I have an iPhone, but I’m sure there are cool things that can be done. But right now I’ve got more than enough to worry about, and I’ve no idea how much computer power the creatures are going to require yet, so I’ll worry about mobile platforms later. One step at a time!

  13. John says:

    Hi Steve,

    I am not sure if you go back and check comments on older posts, or whether you are notified when you receive a new comment on them, but just in case you do not, I have added a comment here in regard to Brainstorm #4:

    In this post you wrote:

    “So, finding the best mechanism for projecting n-dimensional space into two or three dimensions, based on the statistics and salience of stimuli, is part of the challenge of designing an artificial brain. That much I think I can do, up to a point, although I won’t trouble you with how, right now.”

    Can you give any hints (or direct me to a book or journal article to read) on how you achieved this?

    Thank you for your time, I am enjoying following your journey.

    • stevegrand says:

      Hi John, yep, I got your comment. First comments are held for moderation and I only got it as I went to bed last night, so that’s why you couldn’t see it.

      > Can you give any hints (or direct me to a book or journal article to read) on how you achieved this?

      Well, the way I’m achieving it in this project is a bit hard to explain without a lot of background context. I have some specific requirements, so I had to work out a mechanism of my own. But for a well-established semi-abstract technique you could look at Kohonen Nets or google “self-organizing maps” for alternatives.

      In biology, the way orientation selectivity in V1 becomes self-organized has been looked at here and there, although I don’t have any references. I did that myself using my Lucy robot and it’s really not too hard.

      Basically I took a bunch of neurons with initially random input pattern distributions, then pointed Lucy at some moving grid patterns (or the natural world works almost as well) and made the neurons compete/collaborate for the right to become better tuned. Mostly the neurons compete, so if neuron A happens to fire a little bit better than the others to a given line stimulus at a given angle and location, I designed it so that it would suppress other neighboring neurons over a moderate area. The more a neuron fires, the more it tunes its inputs towards the pattern of input that made it fire. This therefore makes it even more likely to win next time round, and the others less likely. So neuron A gets better and better at recognizing that particular stimulus, while the others get worse. But in getting worse at that stimulus, they become better suited to others, while A becomes so tightly tuned to the first stimulus that it fails to fire for others. The net result is that neurons tend to develop unique tunings (over a moderate area, in the case of V1, because you still want other neurons to respond to the same line angle in other regions of the visual field).

      But that alone isn’t enough, because although it causes different neurons to take up unique patterns, they’re randomly scattered. So what I did was make it so that neurons a reasonable distance from A were inhibited when A fired (and vice versa), but neurons very close to A were enhanced. This is called a Mexican Hat function because of its shape – neuron A causes lateral excitation to near neighbors (the hat’s top) and lateral inhibition to more distant neighbors, fading off to zero over a longer distance (the brim). I had to fiddle a bit, using facilitation rather than excitation, but that’s a detail. The point is that if a neuron recognizes an input angle moderately well AND some of its closest neighbors are showing a tendency to recognize it too, but not enough to make them fire strongly, then it gets stronger, because they encourage it.

      So, neurons compete for the right to represent a given angle, but they’re more likely to represent angles that are similar to those of their close neighbors. Each neuron thus “wants” to be *similar but not the same* as those around it. The result is an “orientation whorl”, in which each angle is represented by a unique cell and adjacent cells represent adjacent angles in a smoothly rotating pattern. This is what you find when you look at real visual cortex.
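      If it helps to see that concretely, here’s a minimal toy sketch of the same scheme – every number in it is invented for illustration, and it’s not the actual Lucy code, just the competition-plus-Mexican-Hat idea in its simplest ring-of-neurons form:

      ```python
      import numpy as np

      # Toy sketch of competitive orientation tuning with a Mexican Hat
      # lateral interaction. Invented parameters throughout -- illustration
      # only, not the actual Lucy/Grandroids model.
      N = 64
      rng = np.random.default_rng(0)
      preferred = rng.uniform(0, np.pi, N)  # random initial orientation tunings
      pos = np.arange(N)

      def mexican_hat(d, top=2.0, brim=8.0):
          # Excitation for close neighbours (the hat's top), inhibition at
          # moderate distance, fading towards zero further out (the brim).
          return np.exp(-(d / top) ** 2) - 0.5 * np.exp(-(d / brim) ** 2)

      for t in range(20000):
          stim = rng.uniform(0, np.pi)      # a line stimulus at a random angle
          # shortest angular difference between each tuning and the stimulus
          delta = np.angle(np.exp(2j * (stim - preferred))) / 2
          response = np.exp(-(delta / 0.3) ** 2)
          winner = int(np.argmax(response))
          d = np.minimum(np.abs(pos - winner), N - np.abs(pos - winner))
          # Neurons encouraged by the winner tune towards the stimulus;
          # inhibited ones drift away, so each ends up "similar but not
          # the same" as its neighbours.
          preferred = (preferred + 0.05 * response * mexican_hat(d) * delta) % np.pi

      print(np.round(preferred, 2))  # tunings should now vary smoothly round the ring
      ```

      Run for long enough, adjacent cells should settle on adjacent angles – a crude one-dimensional cousin of those orientation whorls.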

      But as I say, this time I’m having to do something more subtle, because of the structure of this particular “layer” of my neural network and because of some specific requirements for the way things have to organize. I was stuck on this for months, but I think I have a solution now. I’ll explain it along with everything else on my website for backers of the game, but it’ll take a while to get to that point because there’s a lot people need to understand first. But the good news is that you can figure these things out from first principles – personally I don’t even try looking at the literature, because other people’s solutions always come with assumptions or expectations that don’t quite fit my own, and it would be a gradually worsening kludge if I just bolted other people’s ideas together. Anyway, competition across the network is the key. Each neuron should be trying to find its “lowest energy state” – the place where it’s left alone to do its own thing, feels among friends and is well separated from enemies!

      Hope that helps a bit. Why are you asking?

      • John says:

        Thank you very much for your response Steve, last year I had an epiphany, a few events all transpired against me and I found myself in a position where I could think up a really really really simple learning machine (which in all probability will not work, although I did draw it up on some paper and showed it to a person who had a background in Engineering – they suggested it was not very energy efficient). As I did not have a strong background in electronics or computing I used water to design my “learning machine”, partly inspired by the MONIAC computer (or Phillips Machine). Since then I have become a dabbler in AI, more reading than dabbling (so you make a good point, I should start to think about this from first principles) and fortunately for me, I found your books first! After reading “Growing up with Lucy” I attended a conference on AI to see how academia was approaching the same topic, and what a contrast! Your approach is much much more intuitive.

        But to answer your question (sorry for the sidetrack), in Creation I read the approach you were taking in creating Ron’s mind and you did mention this problem of having a massive amount of information to be projected onto poor Ron’s brain. I suppose when you left that carrot “That much I think I can do, up to a point, although I won’t trouble you with how, right now.” dangling there my curiosity got the better of me.

        All the best with your project!

      • stevegrand says:

        Oh, wonderful! There’s a MONIAC in the Science Museum in London – fantastic thing. I love analogue computing. Energy efficiency my foot! Who cares about energy efficiency? If you can cause water to learn, that’s a work of art! Go for it!

        Sorry to hear about the transpiring events – I know the feeling well. Events do have that habit.

        It never occurred to me that my statement about SOMs was a dangling carrot! I haven’t actually tested my solution yet, but I’ll be sure to document it eventually (if it works!). Actually a journalist was asking me about it the other day. It is a bit hard to explain though.
