Sunday, April 26, 2009

Brain update and moving site

Well, just sat in on an interesting discussion with Andy Schatz from pocketwatchgames.com over at aigamedev.com for a couple of hours. It was a fun interview and threw up some really nice details of Andy's games. Interestingly, I'm doing something similar in that my games are AI based, but I think mine is definitely focussed in a different direction: I'm really thinking of my game as a toy rather than a challenge-oriented game.

I've been imagining this pure sandbox style of gameplay where you can set up your own worlds and simply let them run, or you can introduce different characters and brains in order to test "what if" scenarios.

Anyway, I made some good progress today. Just after Andy's talk, I finally realized why my agents weren't doing anything I wanted. After spending a few hours throwing more debug views into the fairly lame current UI, I finally figured out that even though I was reading different values from my agent's blackboard, that's no good unless those values are initialized to something!

Each agent in the game is a big bag of different AI stuff, but in the main there is a behaviour tree (BT) and a blackboard (BB), and the BT acts on the data in the BB in order to process its logic and select actions. It's complicated a little by the emotional appraisal/arousal processes, but at the bare level it's pretty simple. Only I forgot to initialize the blackboard!! :) Specifically, I forgot to add the initial values the agent needs to know whether he has already found somewhere to live and somewhere to work, so the agent never got into the daily rhythm. (The particular agent I'm working on now is called "worker" and does exactly that: he works a job, goes home, fulfills his basic need for sleep, and that's about it. Any spare time he gets, he fulfills his need for entertainment as easily as he can, so usually that would involve sitting in front of a TV, but it could involve almost any form of entertainment he actually likes.)
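For the curious, here's roughly the shape of the bug and the fix. This is just an illustrative sketch; the class and key names are made up for the post rather than lifted from the actual game code:

```cpp
#include <map>
#include <string>

// Minimal sketch of a blackboard with typed boolean entries.
class Blackboard {
public:
    // Reading a key that was never written returns the fallback, but
    // the behaviour tree still needs the key seeded with a sensible
    // starting value or its logic never gets off the ground.
    bool GetBool(const std::string& key, bool fallback = false) const {
        std::map<std::string, bool>::const_iterator it = m_bools.find(key);
        return (it != m_bools.end()) ? it->second : fallback;
    }

    void SetBool(const std::string& key, bool value) {
        m_bools[key] = value;
    }

private:
    std::map<std::string, bool> m_bools;
};

// The fix: seed the entries the worker brain expects *before* the
// behaviour tree ever ticks. (Key names are hypothetical.)
void InitWorkerBlackboard(Blackboard& bb) {
    bb.SetBool("HasHome", false);  // hasn't found somewhere to live yet
    bb.SetBool("HasJob",  false);  // hasn't found somewhere to work yet
}
```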

Worker is an interesting starting point for splicing brains, because he is basically all of us. He is a drone. You hardly get any personality until you add in some other forms of behaviour. This is done by splicing other brains into the worker brain.
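To give a flavour of what I mean by splicing, conceptually it could be as simple as grafting a donor behaviour subtree onto the worker's root selector so the new behaviours get first refusal each tick. A rough sketch, with all names hypothetical and no claim this is exactly how the real brain code is structured:

```cpp
#include <cstddef>
#include <vector>

// Bare-bones behaviour tree node.
struct BTNode {
    std::vector<BTNode*> children;
    virtual ~BTNode() {}
    virtual bool Tick() = 0;
};

// A priority selector: runs children in order until one succeeds.
struct Selector : public BTNode {
    virtual bool Tick() {
        for (std::size_t i = 0; i < children.size(); ++i)
            if (children[i]->Tick())
                return true;
        return false;
    }
};

// Splicing: insert the donor brain's root ahead of the worker's
// default drone routine, so the donor behaviours run first and the
// worker falls back to his daily rhythm when they have nothing to do.
void SpliceBrain(Selector& workerRoot, BTNode* donorBrain) {
    workerRoot.children.insert(workerRoot.children.begin(), donorBrain);
}
```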

In other news, after a lot of mulling it over, I think I'm just going to go with liquidweb for a while and see how things pan out, which should mean a switchover from this blog on blogger.com to a new spangly WordPress-based blog on my own hosting. Although I might just leave this where it is and concentrate on the britishindie.com site exclusively for the project; haven't decided there yet.

WordPress seems like it'll be my platform for almost all content. I'm going to get a vBulletin license too so I can set up a forum, although forums take a horrific amount of time to police for a one-man band like me. I suppose I could code up some weirdo validation scheme at least so I don't get botspam (kinda like the old Spectrum Lenslok or something).

Look for BritishIndie coming soon!

.Z.

Monday, April 13, 2009

Finally exporting the character!

Been having a hell of a time trying to get a test character exported into the engine with a few choice animations, mostly down to my lack of understanding of xforms, pivots and the like. But it feels like we've finally broken the back of it (thanks to Jerry Waugh).

At this point, the next bit is to work on the texture-swapping code and maybe add atlassing in there (the FPS is low because each body part has a unique texture, and that ups the batch count, and batch count kicks framerates in the nuts :)). Atlassing will allow us to have a large number of characters rendered with the same physical texture; it's just a matter of putting all the face textures together into the atlas and fixing up their texture coordinates. But I'm a bit loath to do that right now, there's so much else to get done first.
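For anyone wondering what "fixing up their texture coordinates" actually involves, here's a rough sketch of the UV remap step once a body-part texture has been packed into the atlas. Names are made up for illustration:

```cpp
// A texture coordinate on a character vertex.
struct UV { float u, v; };

// Where the original texture landed inside the atlas, in normalised
// atlas coordinates (slot size divided by atlas size).
struct AtlasSlot {
    float offsetU, offsetV;  // top-left corner of the slot
    float scaleU,  scaleV;   // fraction of the atlas the slot covers
};

// Remap a vertex UV from original-texture space into atlas space.
// With every character sampling the one atlas texture, they can all
// share a draw batch, which is what rescues the frame rate.
UV RemapToAtlas(const UV& uv, const AtlasSlot& slot) {
    UV out;
    out.u = slot.offsetU + uv.u * slot.scaleU;
    out.v = slot.offsetV + uv.v * slot.scaleV;
    return out;
}
```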

Here's a shot of the test character.



And here's a shot of the typical RTS view:



Here's what you'll spend a bit of your time doing. Here I've selected a number of agents in the world, and the second button on the top right will take you to the agent editor, where you will choose which brains the agents have, edit the agents' visual properties, splice brains together, etc. Mostly the game will play out a bit like an RTS meets a more god/Sims style game, so this is the RTS overview where you can get a feel for the world. I'll show the various camera modes in a video soon, I guess. I need to work on the agent editor first; I'm expecting to have to handle many hundreds of brain types (and many user-written ones) plus user-created agent textures in a flexible manner, and I'm trying to think of a way that isn't just a simple list.

Here's how I envision the brain selection looking :)



The point is that, for the most part, the game involves you swapping brains around on characters and watching what happens, so making the brain swapping quick is key here. The F-key interface we used in Worms seems like a really slick way of choosing from what is a rather large number of possible items. Given I'm hoping to have on the order of 200+ brain types, I'm hoping a paged version of this type of F-key interface will work well. Basically, you'll press the Tab key to bring up the brain picker (it slides in from the right), you then select the brain you want to use as an implant, and then choose an "implant selected" icon on the main UI (not there yet).
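The paging logic itself is trivial; here's a rough sketch of the sort of thing I have in mind, with hypothetical names and an assumed twelve slots per page to match F1..F12:

```cpp
#include <string>
#include <vector>

class BrainPicker {
public:
    explicit BrainPicker(const std::vector<std::string>& brains)
        : m_brains(brains), m_page(0) {}

    static const int kItemsPerPage = 12;  // one slot per F key

    int PageCount() const {
        return (static_cast<int>(m_brains.size()) + kItemsPerPage - 1)
               / kItemsPerPage;
    }

    // Flip to the next page, wrapping round at the end.
    void NextPage() {
        if (PageCount() > 0)
            m_page = (m_page + 1) % PageCount();
    }

    // fkey is 1-based (F1 == 1). Returns the index of the chosen
    // brain, or -1 if that slot is empty on the current page.
    int Select(int fkey) const {
        int index = m_page * kItemsPerPage + (fkey - 1);
        return (index < static_cast<int>(m_brains.size())) ? index : -1;
    }

private:
    std::vector<std::string> m_brains;
    int m_page;
};
```

At 200+ brains and twelve per page, that's still under twenty pages to flip through, which feels manageable.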

Other options will be to edit the character (select its textures), take screenshots and videos, build buildings, create and delete characters, and freeze time. Of course, I'll get an artist to redo the UI once I have it fully functional.

Anyway, that's enough for now. Need to go and make some coordination code work! :)

.Z.