Why is it so hard to develop a game console emulator?

  • Why is it so hard to develop a game console emulator? Mike

    I have always found emulators rather fascinating. I would seriously like to create an emulator for an older generation console or handheld. It would help me gain a greater appreciation of the hardware and of the games that were created for that hardware. However, people are always saying how hard it is and that I should not even try. I would like to know why that is.

    Also, I would like some suggestions on a good place to start and where I can find the information I need.

  • SO Question

    This seems to be a popular resource about how they work.

    TL;DR - The console's architecture is totally different from the host's, and it takes a lot of host resources to reproduce in software all the parallel work the original hardware did.

    The CPU architecture of a game console is often somewhat exotic compared with your average desktop machine. Emulation means performing in software everything that the original hardware did. That is, while the original console may have had dedicated graphics, audio, etc. chips as well as a CPU with a different instruction set, the emulator must perform all the functions of these parallel resources at speed.
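    To make the "different instruction set" point concrete, here is a minimal sketch of the interpreter loop at the heart of every CPU emulator: fetch an opcode, decode it, execute it. The three opcodes, their encodings, and the memory layout below are entirely invented for illustration; a real console core implements hundreds of instructions plus flags, interrupts, and cycle-accurate timing.

    ```cpp
    #include <cstdint>
    #include <cstdio>
    #include <vector>

    // A toy CPU: one accumulator, a program counter, and flat memory.
    struct ToyCpu {
        uint8_t a = 0;                 // accumulator
        uint16_t pc = 0;               // program counter
        std::vector<uint8_t> mem;      // program + data in one address space

        explicit ToyCpu(std::vector<uint8_t> program) : mem(std::move(program)) {}

        // One fetch-decode-execute step; returns false on halt.
        bool step() {
            uint8_t op = mem[pc++];                       // fetch
            switch (op) {                                  // decode + execute
                case 0x01: a = mem[pc++];  return true;    // LDI: load immediate
                case 0x02: a += mem[pc++]; return true;    // ADD: add immediate
                case 0xFF: return false;                   // HLT: stop
                default:   return false;                   // unknown opcode
            }
        }
    };

    int main() {
        // Program: LDI 5; ADD 7; HLT
        ToyCpu cpu({0x01, 5, 0x02, 7, 0xFF});
        while (cpu.step()) {}
        std::printf("a = %d\n", cpu.a);  // a = 12
        return 0;
    }
    ```

    Even this trivial loop hints at the cost: every guest instruction becomes a memory read, a branch through the switch, and some host work, which is one reason straightforward interpretation runs many times slower than native code.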

    Unless the console's GPU is very old, it almost certainly must be emulated on the host machine's GPU, as modern graphics cards, even cheap ones, have many times the throughput (for graphics workloads) of even the most expensive multicore CPUs. Compounding this difficulty is the fact that communication between the CPU, GPU, any other onboard DSPs, and memory was probably highly optimized on the console to take advantage of the specifics of the hardware configuration, so these resources must be rate-matched as well.
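    One common way emulators rate-match components is "catch-up" scheduling: run the CPU core for a slice of cycles, then advance the video (and audio, DSP, etc.) units by the equivalent amount of time. The sketch below shows the idea; the cycle-per-scanline and scanlines-per-frame numbers are hypothetical placeholders, not any real console's timing.

    ```cpp
    #include <cstdint>
    #include <cstdio>

    // Hypothetical timing constants for illustration only.
    constexpr int64_t kCpuCyclesPerScanline = 228;
    constexpr int kScanlinesPerFrame = 262;

    // A stand-in for an emulated video unit that is advanced in
    // scanline-sized steps to stay in lockstep with the CPU.
    struct Video {
        int scanline = 0;
        void tick_scanline() { scanline = (scanline + 1) % kScanlinesPerFrame; }
    };

    int main() {
        Video video;
        int64_t cpu_cycles = 0;
        // Emulate one frame: interleave CPU slices with video catch-up,
        // so neither component runs ahead of the other's timeline.
        for (int line = 0; line < kScanlinesPerFrame; ++line) {
            cpu_cycles += kCpuCyclesPerScanline;  // pretend the CPU core ran this many cycles
            video.tick_scanline();                // bring the video unit up to the same point
        }
        std::printf("cycles per frame = %lld\n",
                    static_cast<long long>(cpu_cycles));
        return 0;
    }
    ```

    Games that race the beam or poll hardware registers mid-scanline force the slice size down, which is exactly why tight hardware coupling on the console makes the emulator slower on the host.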

    Compounding all these difficulties, usually little is known about the specifics of the console's hardware, as this is kept very much under wraps by design. Reverse engineering it is becoming less and less feasible for hobbyists.

    To put things into perspective, an architectural simulator (a program which can run, for example, a PowerPC program on an x86 machine and collect all sorts of statistics about it) might run between 1000x and 100000x slower than real-time. An RTL simulation (a simulation of all the gates and flip-flops that make up a chip) of a modern CPU can usually only run between 10Hz and a few hundred Hz. Even very optimized emulation is likely to be between 10 and 100 times slower than native code, thus limiting what can be emulated convincingly today (particularly given the real-time interactivity implied by a game console emulator).

c++ architecture hardware