Just came back from watching Indy IV.
************************
Very Minor Spoiler Alert,
that you probably already
know if you've watched any
of the trailers
************************
Sean Connery is *not* in the movie. They say he "died" or something, then go back to punching bad guys. Shia LaBeouf *is* in the movie, and he takes on the standard Indiana Jones role of runner-up, doing the 2nd half of all the good-guy action sequences.
I did think it was interesting, given how much they must have offered to get Harrison Ford to sign on for the film, and the fact that this is almost certainly the last Indiana Jones movie, that they didn't pull out all the stops and bring in Sean Connery, Gimli the Dwarf, and everyone else from the previous movies.
But I suppose the screenwriters were a little too smart for that, and a little smarter than many game developers. Indiana Jones has a pretty specific formula - you've got Indy in the center, a 2nd character for him to make snide comments to and to do silly 2-person co-op punching stunts with, and then one or two additional allies that mostly bumble around and remind you that he's not like the other "professors." This isn't an X-Men movie, and trying to have more than a couple substantial characters would take too much away from the action, especially if you're trying to tie everything up in a finale. For examples, see X3, Pirates 3, LotR 3 (even Tolkien made that mistake). The last Potter book is a brilliant counter-example, but they've already announced they'll have to take two films to do it justice, so I think it still proves my point.
But how often do you see a video-game sequel that, for each new feature, remembers to throw out an old one? Starcraft 2 comes to mind - I heard that the number of unique units was going to stay similar to the original, in order to keep each unit meaningful. But too often we view each feature as an incremental, always-positive improvement, and why throw out perfectly good code while you're adding more code? It's easy to forget the cost of complexity, the narrowing of demographics that occurs when your tutorial only covers the new features you've added for the sequel, and the pure intimidation factor of having 20 functions mapped to the controller, even if every one of them would be fun on its own.
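To put the "one in, one out" idea in code terms, here's a minimal sketch of treating the control scheme as a fixed budget rather than an ever-growing list. Everything here (Action, ControllerMapping, the budget size) is invented for illustration, not any real engine's API:

```cpp
// Hypothetical sketch: a control mapping with a hard budget. Once it's full,
// adding a new action forces the designer to retire an old one first.
#include <algorithm>
#include <cstddef>
#include <iostream>
#include <string>
#include <vector>

struct Action {
    std::string name;    // e.g. "whip swing"
    std::string button;  // e.g. "RT"
};

class ControllerMapping {
public:
    explicit ControllerMapping(std::size_t budget) : budget_(budget) {}

    // Refuses to grow past the budget; something has to be cut first.
    bool Add(const Action& action) {
        if (actions_.size() >= budget_) {
            std::cout << "Mapping full: retire an old feature before adding '"
                      << action.name << "'\n";
            return false;
        }
        actions_.push_back(action);
        return true;
    }

    void Retire(const std::string& name) {
        actions_.erase(std::remove_if(actions_.begin(), actions_.end(),
                                      [&](const Action& a) { return a.name == name; }),
                       actions_.end());
    }

private:
    std::size_t budget_;
    std::vector<Action> actions_;
};
```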
You're best off just finding the base formula that "works" for your game, and executing that, no more, no less. Hell, this advice ought to be heeded even if you're making the *first* game in a series. And best of all, doing less is cheaper, too!
Monday, May 26, 2008
Sunday, May 18, 2008
The Truman Show
The Truman Show (yes, haha) was on cable over the weekend. One of the scenes shows the crew setting up a street for Jim Carrey to drive to work. All the extras get into position along the street and in the shops, and don't "turn on" until he comes around the corner. That way they don't have to pay actors to continuously populate a town; they ferry them around and only pay for the people in a small bubble around Jim Carrey.
Reminded me a lot of streaming, open-world games - it's almost an exact real-world analog of what we do in a streaming world. In both cases, the only reason we go to such absurd lengths to start and stop pockets of reality around the protagonist is resource constraints. It'd be easier, and take less work and thought (although far more resources), to just fully populate the entire virtual world, 24 hours of every day. So while the world, as perceived by the viewer, is extremely dense, populated, and realistic, behind the scenes we're cutting corners wherever we can.
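In code, the trick looks something like the sketch below - a hypothetical bubble that only keeps NPCs fully simulated near the player, just like the extras who only "turn on" around Jim Carrey. The names (Npc, World, the activation radius) are all made up for illustration, not any particular engine's API:

```cpp
// Hypothetical streaming bubble: entities inside the radius run their full
// simulation; everything outside is switched off until the player returns.
#include <cmath>
#include <vector>

struct Vec3 { float x, y, z; };

static float Distance(const Vec3& a, const Vec3& b) {
    const float dx = a.x - b.x, dy = a.y - b.y, dz = a.z - b.z;
    return std::sqrt(dx * dx + dy * dy + dz * dz);
}

struct Npc {
    Vec3 position{};
    bool active = false;

    void Activate()   { active = true;  /* spawn visuals, start AI, audio... */ }
    void Deactivate() { active = false; /* freeze state, release expensive resources */ }
    void Update(float dt) { if (active) { /* full simulation only while active */ } }
};

class World {
public:
    // Called once per frame with the player's position.
    void UpdateStreamingBubble(const Vec3& playerPos, float dt) {
        for (Npc& npc : npcs_) {
            const bool shouldBeActive =
                Distance(npc.position, playerPos) < kActivationRadius;
            if (shouldBeActive && !npc.active)      npc.Activate();
            else if (!shouldBeActive && npc.active) npc.Deactivate();
            npc.Update(dt);
        }
    }

private:
    static constexpr float kActivationRadius = 100.0f;  // arbitrary bubble size
    std::vector<Npc> npcs_;
};
```

The whole game then rests on the assumption that nothing the player can't currently perceive needs to be paid for.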
I think this shows how much further we have to go, technologically, before we really "have enough" processing power, memory, etc. Everyone will be clamoring for the Xbox720s and PS4s as soon as new games are demoed, because out of the hundreds of corners we currently cut to simulate reality, some number of them can then be simulated fully. And we can never really say "technology has progressed far enough for photo-realism" until we're no longer constantly running around behind the scenes, moving everything in and out of a tiny LOD bubble around the player because we've pushed the available resources to their absolute limit.
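That LOD bubble boils down to picking a cheaper representation the farther something is from the player. A minimal sketch, with thresholds and names invented purely for illustration:

```cpp
// Hypothetical LOD selection: the same object exists at several detail levels,
// and we choose one per frame based on distance to the player. Anything past
// the far threshold isn't simulated at all - a corner the player never sees cut.
enum class LodLevel { High, Medium, Low, CulledOut };

inline LodLevel SelectLod(float distanceToPlayer) {
    if (distanceToPlayer < 30.0f)  return LodLevel::High;    // full mesh, full AI
    if (distanceToPlayer < 120.0f) return LodLevel::Medium;  // simplified mesh, cheap AI
    if (distanceToPlayer < 400.0f) return LodLevel::Low;     // impostor / billboard
    return LodLevel::CulledOut;                              // not simulated at all
}
```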
I think players don't realize all the things they're missing out on, because they're not in the meetings where we say "that wouldn't be feasible," and then trade manpower for CPU cycles and memory, doing our utmost to hide it all behind the scenes and to head off the question "why didn't you do..." But come the next 10x jump in performance, some number of those crazy ideas actually become feasible, and games get better.
Until I can afford to be lazy and wasteful at my job, we don't have anywhere near enough technological resources available.