Saturday, September 19, 2009

Another Box Mental work-in-progress video

I had some issues (see my YouTube channel), but I finally have Box2D doing object separation and collision detection for my work-in-progress indie game "Box Mental".

All I can say is that Box2D rocks. I'll definitely be playing with the Flash version for another prototype idea I've got. Next up for Box Mental? Well, I need to update the navmesh generation so that it can handle arbitrary scenegraph nodes. I also need to add collision shapes to those various objects. Then I'll work on some more interesting behaviours/animations. I plan on submitting to the Sony compo this week too.
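
For anyone curious, the Box2D side of that agent separation is pretty small: one dynamic body with a circle fixture per agent, then step the world each frame. This is only a minimal sketch against the later Box2D 2.x fixture API (the exact constructor and shape-creation calls vary between versions), not the actual Box Mental code:

#include <Box2D/Box2D.h>

// Create a world with no gravity (top-down game), then one circular body per agent.
b2World* CreateAgentWorld()
{
    b2Vec2 gravity(0.0f, 0.0f);
    return new b2World(gravity);   // some 2.x versions also take a doSleep flag here
}

b2Body* CreateAgentBody(b2World& world, float x, float y)
{
    b2BodyDef bodyDef;
    bodyDef.type = b2_dynamicBody;
    bodyDef.position.Set(x, y);
    b2Body* body = world.CreateBody(&bodyDef);

    b2CircleShape circle;
    circle.m_radius = 0.5f;        // agent radius in metres (illustrative value)

    b2FixtureDef fixtureDef;
    fixtureDef.shape = &circle;
    fixtureDef.density = 1.0f;
    fixtureDef.friction = 0.0f;    // agents just need separation, not sliding friction
    body->CreateFixture(&fixtureDef);
    return body;
}

// Each frame: world->Step(1.0f / 60.0f, 8, 3); then read the b2Body positions back
// into the scenegraph nodes.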

More soon.

Posted via email from zoombapup's post-o-matic

Wednesday, September 09, 2009

Another work-in-progress of Box Mental

I'm working on the zombie behaviours still. Here's a work-in-progress video that shows them in action. Zombies = dark guys, humans = light guys.

The animations are totally messed up there, so I'll be working on that next. I took some time to fiddle with icons and the like, because presentation counts, even for programmer art! :)

Posted via email from zoombapup's post-o-matic

Tuesday, September 08, 2009

Blast from the past

Oh man. This takes me back. I can't even remember why I started looking for this. But here is a link to a Scream Tracker mod I did back in 1994.

http://cd.textfiles.com/soundmod1/music/s3m_mz/timeout.s3m

Plays pretty much as I remember it in VLC too!

Back then, I was heavily into the demoscene. Not that there was much of it in the UK. But I was coding my own assembler demos, tracking and all of the usual demo stuff. Man, those were good times. I remember phone hacking to a site in the US that was supposedly the biggest underground BBS of the day. Me and a guy called Ben(?) were using BlueBeep to fiddle around with Kingston Comm's phone network.

And that was 15 bloody years ago. FIFTEEN!

Damn.

Posted via email from zoombapup's post-o-matic

Monday, September 07, 2009

Update video - Follow Behaviour

I just added a new work-in-progress video to Vimeo. This time it's an example of a follow behaviour. Basically, the dark guys are zombies and the light guys are humans.

Now to work on some recursive dimensional clustering for collision detection between the agents.
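
For my own notes, recursive dimensional clustering boils down to: project the agents' bounds onto an axis, split the group wherever there's a gap in the projections, recurse on the other axis, and only do pairwise collision tests within groups that can't be split any further. A rough sketch of that idea (the AgentBounds type is made up for illustration; this isn't tested project code):

#include <algorithm>
#include <vector>

struct AgentBounds { int id; float min[2]; float max[2]; };  // axis 0 = x, 1 = y

struct MinOnAxis
{
    int axis;
    explicit MinOnAxis(int a) : axis(a) {}
    bool operator()(const AgentBounds& a, const AgentBounds& b) const
    { return a.min[axis] < b.min[axis]; }
};

// Split 'group' wherever the projections onto 'axis' leave a gap. Groups that
// can't be split on either axis are emitted as clusters for pairwise testing.
void Cluster(std::vector<AgentBounds> group, int axis, int axesTried,
             std::vector< std::vector<AgentBounds> >& clusters)
{
    if (group.size() <= 1 || axesTried >= 2) { clusters.push_back(group); return; }

    std::sort(group.begin(), group.end(), MinOnAxis(axis));

    std::vector<AgentBounds> current;
    float currentMax = 0.0f;
    bool split = false;

    for (size_t i = 0; i < group.size(); ++i)
    {
        if (!current.empty() && group[i].min[axis] > currentMax)
        {
            Cluster(current, 1 - axis, 0, clusters);   // gap found: recurse on the other axis
            current.clear();
            split = true;
        }
        current.push_back(group[i]);
        currentMax = (current.size() == 1) ? group[i].max[axis]
                                           : std::max(currentMax, group[i].max[axis]);
    }

    // Last (or only) subgroup: if we never split, count this axis as tried.
    Cluster(current, 1 - axis, split ? 0 : axesTried + 1, clusters);
}

The pairwise checks then only run within each returned cluster rather than across every agent in the world.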

Posted via email from zoombapup's post-o-matic

Thursday, September 03, 2009

Interesting idea for a game

So here's something interesting. Can you tell what's going on in this video?

The game is called His and Her Disconnected Conversations.

Here's my theory of what's happening.

It's basically a sort of weird (and wonderful) take on a matching game. The left-hand side is a guy. The right-hand side is a girl. When you start, you get three different couples. The couples come together and start talking. Only they start in random positions and talk about different topics. Presumably if you are Japanese, you can understand what the conversations mean, but they will appear disjointed because they will be out of sync between couples. You then swap the different characters around until you think all of the conversations are being held normally. I presume an early match adds extra score. It appears that once there is a confirmed match, the conversations accelerate to the end.

What a brilliant idea!!

Hats off to the creator.

Posted via email from zoombapup's posterous

New video

I just added a work-in-progress video of my current project.

Posted via email from zoombapup's posterous

Saturday, May 02, 2009

Using V8 for scripting in my game

I'm going to try and find time this weekend to throw Google's "V8" JavaScript engine into my game.

http://code.google.com/p/v8/

Basically, it's a JavaScript implementation intended for embedding within applications (and it's used by Google in a lot of their apps).

I figure using JavaScript as a language has some advantages for end users who want to play with modding the game, alongside the benefit of having Google funding the development of the script engine. Their implementation seems easy enough and covers some of my requirements (it allows me to create/load scripts on the fly, etc.).
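
For reference, the classic hello-world embedding from the V8 docs of this era is about this small (later V8 versions changed the API substantially, so treat this as a sketch rather than gospel); it gives a feel for how little glue is needed to compile and run a script:

#include <cstdio>
#include <v8.h>

int main()
{
    // Set up a scope for local handles and a fresh JavaScript context.
    v8::HandleScope handle_scope;
    v8::Persistent<v8::Context> context = v8::Context::New();
    v8::Context::Scope context_scope(context);

    // Compile and run a script from a string; loading scripts from file (or at
    // runtime, for modding) would feed in here just as easily.
    v8::Handle<v8::String> source = v8::String::New("'Hello' + ', World!'");
    v8::Handle<v8::Script> script = v8::Script::Compile(source);
    v8::Handle<v8::Value> result = script->Run();

    v8::String::AsciiValue ascii(result);
    printf("%s\n", *ascii);

    context.Dispose();
    return 0;
}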

Reading up on their license has got me thinking as well. They're using a BSD-style license, which is one I much prefer. I've been using OGRE along with a few other open source libs, and honestly the LGPL and anything GPL-based really irks me. I simply dislike DLLs, and being platform dependent, they annoy me because I then have to figure out how to stay within an LGPL license (which allows commercial usage with dynamic linking) on other platforms. After visiting the LGPL license website, where it's trying to push you towards GPL usage instead, I *really* dislike this idea that all software must be free. It seems like their version of freedom means nobody earns a living.

Anyway, it's enough to make me think again about my decision to use Ogre. Not that Ogre is bad, and I know that they've relaxed the licensing on Ogre to allow static linking too. But every library that's associated with Ogre ends up getting infected with the LGPL usage, which means a dozen more DLLs to track and update.

I also plan on using WebKit to run my UI; after considering lots of options, I think it will ultimately be the most flexible. Using a typical web-browser-based frontend means that I can literally prototype the UI using regular browser-style tools. I could even use Flash and Flex to do it at a push.

Anyway, enough worrying, time to code!

Sunday, April 26, 2009

Brain update and moving site

Well, I just sat in on an interesting discussion with Andy Schatz from pocketwatchgames.com over at aigamedev.com for a couple of hours. It was a fun interview and threw up some really nice details of Andy's games. Interestingly, I'm doing something similar in terms of my games being AI based, but I think mine is definitely focussed in a different direction, in that I'm really thinking of my game as a toy rather than a particular challenge-oriented game.

I've been imagining this pure sandbox-style gameplay where you can set up your own worlds and simply let them run, or you can introduce different characters and brains in order to test "what if" kinds of scenarios.

Anyway, I made some good progress today. Particularly, just after Andy's talk I finally realized why my agents weren't doing anything I wanted. After spending a few hours throwing more debug views into the fairly lame current UI, I finally figured out that even though I was reading different values from my agent's blackboard, that's no good unless those values are initialized to something!

Each agent in the game is a big bag of different AI stuff, but in the main there is a behaviour tree and a blackboard, and the BT acts on the data in the BB in order to process its logic and select actions. It's complicated a little by the emotional appraisal/arousal processes, but at the bare level it's pretty simple. Only I forgot to initialize the blackboard!! :) Specifically, I forgot to add the initialized values the agent needs to know whether he had already found somewhere to live and somewhere to work, so basically the agent never got into the daily rhythm. (The particular agent I'm working on now is called "Worker" and does exactly that: he works a job, goes home, fulfills his basic needs for sleep and that's about it. Any spare time he gets, he fulfills his need for entertainment as easily as he can, so usually that would involve sitting in front of a TV, but it could involve almost any form of entertainment he actually likes.)
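
To make the fix concrete, the blackboard's startup values need seeding before the behaviour tree ever ticks. Here's a tiny sketch of the idea (the key names and types are made up for illustration, not the real ones in the game):

#include <map>
#include <string>

class Blackboard
{
public:
    void SetBool(const std::string& key, bool value) { m_bools[key] = value; }

    // Fall back to a default rather than silently returning nothing when the
    // key was never written -- which is exactly the bug described above.
    bool GetBool(const std::string& key, bool defaultValue = false) const
    {
        std::map<std::string, bool>::const_iterator it = m_bools.find(key);
        return it != m_bools.end() ? it->second : defaultValue;
    }

private:
    std::map<std::string, bool> m_bools;
};

// Seed the worker's starting state before the first BT update.
void InitWorkerBlackboard(Blackboard& bb)
{
    bb.SetBool("HasHome", false);
    bb.SetBool("HasJob", false);
}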

Worker is an interesting starting point for splicing brains, because he is basically all of us. He is a drone. You hardly get any personality until you add in some other forms of behaviour. This is done by splicing other brains into the worker brain.

In other news, after a lot of mulling it over, I think I'm just going to go with liquidweb for a while and see how things pan out. That should mean a switchover from this blog on blogger.com to a new spangly WordPress-based blog on my own hosting. Although I might just leave this where it is and concentrate on the britishindie.com site exclusively for the project; I haven't decided there yet.

WordPress seems like it'll be my platform for almost all content. I'm going to get a vBulletin license too so I can set up a forum, although forums take a horrific amount of time to police for a one-man band like me. I suppose I could code up some weirdo validation scheme at least so I don't get botspam (kinda like the old Spectrum Lenslok or something).

Look for BritishIndie coming soon!

.Z.

Monday, April 13, 2009

Finally exporting the character!

I've been having a hell of a time trying to get a test character exported into the engine with a few choice animations. Mostly it's my lack of understanding of xforms and pivots and the like. But it feels like we've finally broken the back of it (thanks to Jerry Waugh).

At this point, the next bit is to work on the texture-swapping code and maybe add atlasing in there (the FPS is low because each body part has a unique texture, which ups the batch count, and batch count kicks framerates in the nuts :)). Atlasing will allow us to have a large number of characters rendered with the same physical texture (it's just a matter of putting all the face textures together, adding them into the atlas and fixing up their texture coordinates). But I'm a bit loath to do that right now; there's so much else to get done first.
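
The UV fix-up part is the easy bit at least: once a face texture has been packed into a sub-rectangle of the atlas, each vertex UV just gets rescaled into that region. A rough sketch of the idea (these aren't real engine types, just the maths):

struct AtlasRegion { float u0, v0, u1, v1; };   // sub-rect of the atlas, in [0,1]
struct CharVertex  { float u, v; };             // position/normal etc. omitted

// Rescale a body part's UVs so they sample its region of the shared atlas.
void RemapUVsToAtlas(CharVertex* verts, int count, const AtlasRegion& r)
{
    for (int i = 0; i < count; ++i)
    {
        verts[i].u = r.u0 + verts[i].u * (r.u1 - r.u0);
        verts[i].v = r.v0 + verts[i].v * (r.v1 - r.v0);
    }
}

Once all the parts sample the same atlas they can share a material, which is what lets the renderer batch the characters instead of paying a batch per body part.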

Here's a shot of the test character.



And here's a shot of the typical RTS view:



Here's what you'll spend a bit of your time doing. Here I've selected a number of agents in the world, and the second button on the top right will take you to the agent editor, where you will choose which brains the agents have, edit the agents' visual properties, splice brains together, etc. Mostly the game will play out a bit like an RTS meets a more god/Sims-style game, so this is the RTS overview where you can get a feel for the world. I'll show the various camera modes in a video soon, I guess. I need to work on the agent editor first. I'm expecting to have to handle many hundreds of brain types (and many user-written ones) plus user-created agent textures in a flexible manner, and I'm trying to think of a way that isn't just a simple list.

Here's how I envision the brain selection looking :)



The point is that, for the most part, the game involves you swapping brains around on characters and watching what happens, so making the brain swapping quick is the key here. The F-key interface we used in Worms seems like a really slick way of choosing from what is a rather large number of possible items. Given I'm hoping to have in the order of 200+ brain types, I'm hoping a paged version of this type of F-key interface will work well. Basically, you'll press the Tab key to bring up the brain picker (it slides in from the right), select the brain you want to use as an implant, and then choose an "implant selected" icon on the main UI (not there yet).

Other options will be to edit the character (select its textures), take screenshots and videos, build buildings, create and delete characters and freeze time. Of course I'll get an artist to redo the UI once I have it fully functional.

Anyway, that's enough for now. I need to go and make some coordination code work! :)

.Z.

Monday, March 16, 2009

New art style for "leaf" and behaviors

I'm getting prepared to head off to San Francisco for GDC and thought I'd post this before I set off on that epic journey.



I've been toying with some art styles, just for test purposes, that allow me to experiment with the level of "realism" required for the behavioral/emotional side, and to demonstrate that, at least in part, you don't require realism for emotion to show.



So I came up with this iconic character style based on a number of influences. It's great in that it's one style I can actually manage to produce myself! ;) Even the UV layout is easy enough that I can make "skins" for the characters pretty quickly. I'll get an artist to rework them once the world is populated and playing nicely.



So here is the great reveal :)



As you can probably tell, the rest of the world is a work in progress. I need to skin some city blocks up in a similar style to see how that would work. But the "boxes for everything" style certainly works. Plus the faces will be easier to animate using a render-to-texture for that part anyway. Composition for the win!
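
For the face render-to-texture, the stock Ogre manual-render-target setup is small enough to jot down here as a reminder. This is just the standard pattern, not project code; the "FaceRTT" name, the size and the faceCamera are placeholders:

#include <Ogre.h>

// Create a small render target that the face material can sample from.
Ogre::RenderTexture* CreateFaceTarget(Ogre::Camera* faceCamera)
{
    Ogre::TexturePtr tex = Ogre::TextureManager::getSingleton().createManual(
        "FaceRTT",
        Ogre::ResourceGroupManager::DEFAULT_RESOURCE_GROUP_NAME,
        Ogre::TEX_TYPE_2D, 256, 256, 0,
        Ogre::PF_R8G8B8,
        Ogre::TU_RENDERTARGET);

    Ogre::RenderTexture* target = tex->getBuffer()->getRenderTarget();
    target->addViewport(faceCamera);   // the face scene renders from this camera
    return target;
}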

Till next time.

.Z.

Wednesday, February 18, 2009

An AI that can express emotions

I've been trying to get my plan together for our lecture at GDC.

There's plenty of academic stuff I can go over, particularly models of personality, mood and emotion that you see in a lot of affective computing literature. That of course is pretty useful information in and of itself (great to get going if you're interested in learning more for yourself).

But I really wanted to get something so compelling and convincing for people to see that they simply couldn't ignore the subject. Right now there are plenty of people who really can't see the benefit of having their agents express emotion. I can understand where they're coming from, given that it's additional production effort that takes you into unproven territory.

I'm just not entirely convinced that I can produce an artefact so utterly compelling that it changes how many in the industry view their own efforts. Sure, there are people like Will Wright saying that we need to explore this kind of game-space more, and he's a pretty smart cookie after all. But you know how hard it is to convince someone that games don't simply have to be about killing?

Given that the Wii has expanded the market into new areas, you would expect more experimentation in what is arguably the most profitable game segment ever (the simulation). But visit any game trailer site and look at the number of simulation games that aren't based on war, and you'll see where I'm coming from.

Imagine trying to pitch a game involving the creation of the AI equivalent of a romantic comedy in the vein of "Four Weddings and a Funeral" and you'll maybe see my problem. I think games have a potential for social and emotional simulation that may well be a hugely under-investigated role for games as a medium, yet I'm just not entirely sure I can create something convincing enough that people will see what that might look like.

Right now my demo looks awful, has models for basically what they had in The Sims 1 in terms of AI, and has no compelling behavior that would let people draw a line and say "yeah, I can see that would be fun to play". Without that obvious fun factor, I just don't see the thing being truly convincing to the game development community.

I'm going to try and consolidate my thoughts on the link between emotion and obvious game-related behavior over the next few days, so that I can think of specific use cases and the passage of code that elicits the correct behavior selection and action from the AI, which will demonstrate the potential of modelling emotion to extend the depth of reactions at least. There are so many fundamental philosophical questions that this whole thing brings up, though. I mean, if you'd asked me whether I would be reading Descartes this time last year, I'd have said definitely not. Now I'm reading about neuroscience, philosophy, acting, emotion research, psychology and iconography in order to get a handle on the real "meat" of this problem.

I think I need a lie down :)

.Z.

Sunday, February 08, 2009

Behavior Tree XML

Previously I was working on parsing XML for the game entities and thought I'd continue through to actually parsing the AI setup from that effort. So now, after a few hours' work (which included a lot of refactoring, and there's a ton of that left to do), I can actually specify an XML file as part of the AIComponent specification in the GameObject's XML.

Here's a typical GameObject's XML:


Arrgh, it won't display XML! :)


Hopefully you can see here that the game object has a number of components. In this case, the AI component (a gamecomponent of type "AI") has a tree specified as another XML file. When I parse the AIComponent spec, I'm basically passing that XML file spec to the AIComponent, which instantiates the AIActor class with the given XML file, which then loads the actor's Behavior Tree from it. Long-winded, but certainly much more flexible than before.
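
Since Blogger ate the XML, here's a made-up example of roughly that shape, driven through TinyXML (which I've also used in the project). The element and attribute names are only illustrative, not the actual schema:

#include <cstdio>
#include <string>
#include "tinyxml.h"

// Illustrative only -- not the real file format.
static const char* kExampleXml =
    "<GameObject name=\"Zombie01\">"
    "  <Component type=\"Render\" mesh=\"zombie.mesh\" />"
    "  <Component type=\"AI\" tree=\"zombie_bt.xml\" />"
    "</GameObject>";

void ParseGameObject()
{
    TiXmlDocument doc;
    doc.Parse(kExampleXml);

    TiXmlElement* obj = doc.FirstChildElement("GameObject");
    for (TiXmlElement* comp = obj->FirstChildElement("Component");
         comp != 0; comp = comp->NextSiblingElement("Component"))
    {
        const char* type = comp->Attribute("type");
        if (type && std::string(type) == "AI")
        {
            // This is the hand-off described above: the AI component receives
            // the path of a second XML file that defines the behaviour tree.
            printf("AI component uses tree file: %s\n", comp->Attribute("tree"));
        }
    }
}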

Now I've got to refactor some of the blackboard code to do the same thing, I think, although I need to do more work there to allow arbitrary values to be added, queried and removed. CRUD, as I believe the acronym goes.

So now I can play with different priority values more easily. But it's still not as flexible as runtime tweaking. I guess the next big step is to add Lua or some other scripting engine into the codebase. Can't say I'm a massive fan of Lua's syntax though; I might try GameMonkeyScript or AngelScript.

The other option is to make myself a network proxy interface and allow this kind of tweaking to work via the network. Essentially this means that I need an object attribute system (which I already have) that works transparently over a network, which is pretty simple. Then I just code a nice C# app that does remote inspection of the various objects and attributes, and Bob's your uncle. Or auntie, if you have a weird family :)

Right, onwards..

.Z.

Thursday, February 05, 2009

Snow day fun!

Hmm, it seems like a semi-productive day. It's snowing outside anyway, so I wasn't going to go out.

I got an XML parser sorted out for the game objects in my engine. Basically all game objects are component-based (i.e. they use aggregation rather than inheritance).
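
In other words, a game object is just a flat bag of components rather than a deep class hierarchy. A bare-bones sketch of that idea (the names here are illustrative, not the engine's actual classes):

#include <string>
#include <vector>

class GameComponent
{
public:
    virtual ~GameComponent() {}
    virtual std::string GetType() const = 0;   // e.g. "Render", "AI"
    virtual void Update(float dt) = 0;
};

// The game object owns a list of components and delegates to them,
// instead of inheriting behaviour from a base class tree.
class GameObject
{
public:
    ~GameObject()
    {
        for (size_t i = 0; i < m_components.size(); ++i) delete m_components[i];
    }

    void AddComponent(GameComponent* c) { m_components.push_back(c); }

    void Update(float dt)
    {
        for (size_t i = 0; i < m_components.size(); ++i) m_components[i]->Update(dt);
    }

private:
    std::vector<GameComponent*> m_components;
};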

So now I have a bit more flexibility, as I can save my test worlds in an XML file and have the world loader class simply load the world.xml file (or whatever) that defines all of the agents and other objects in the game.

Next up is to do the same for the behavior tree itself. I've already done an editor of sorts that I can knock into better shape for the XML creation of a tree, so it's just a matter of using the excellent XMLReader library to parse in the BT itself. (Incidentally, if anyone is looking for an XML class that does the business, I'm preferring this one to TinyXML, which I've also used in the project.)

The fun thing is that I got libcurl set up yesterday to pull a spreadsheet from Google Docs as XML. I can then feed this into the world loader class, and theoretically I can actually distribute my game "levels" via Google!
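
For reference, the libcurl fetch itself is only a handful of lines, something like this (placeholder URL, minimal error handling):

#include <string>
#include <curl/curl.h>

// Append each chunk libcurl hands us onto a std::string.
static size_t WriteToString(void* data, size_t size, size_t nmemb, void* userPtr)
{
    std::string* out = static_cast<std::string*>(userPtr);
    out->append(static_cast<char*>(data), size * nmemb);
    return size * nmemb;
}

// Fetch the published spreadsheet feed as one big XML string.
std::string FetchLevelXml(const char* url)
{
    std::string xml;
    CURL* curl = curl_easy_init();
    if (curl)
    {
        curl_easy_setopt(curl, CURLOPT_URL, url);
        curl_easy_setopt(curl, CURLOPT_WRITEFUNCTION, WriteToString);
        curl_easy_setopt(curl, CURLOPT_WRITEDATA, &xml);
        curl_easy_setopt(curl, CURLOPT_FOLLOWLOCATION, 1L);
        curl_easy_perform(curl);
        curl_easy_cleanup(curl);
    }
    return xml;   // hand this to the world loader's XML parser
}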

I also wasted an hour or two looking at various video capture software to try and record a video of the app (it's not really a game yet). I tried wegame, guncam and fastcampro, but of course Fraps is the only really useful solution outside of dumping raw frames and using something like Premiere to turn them into a video. Fraps did a bloody good job of recording 720p video from the game, but unfortunately Windows Movie Maker bites at encoding. Still, it was good fun splicing a few shots together and titling it all, although I think the footage looks bad enough that I'll keep it to myself for posterity.

I also packaged up a .zip file of the game today to see how trim I can get it. A quick purge of some of the NVIDIA texture library from my media folder and I'm down to an 8MB zip file. If I were a bit more aggressive about pruning, I think about 5-6MB is probably achievable. Not the ideal download, but then most of it is Ogre libraries, which, now I think about it, I can mostly do without.

Anyway, it was a productive, fun day doing some general AI coding and engine schlock, and having fun making dumb movies to amuse myself. Life is good when it snows!!!