GUI Elements - How to go about them?

The Communist Duck

    Note: I plan on making my own GUI system. It will be good for learning, lightweight, only have bits I need, ties in with the game, etc.

    I was thinking about how to do it. The elements I mean are:

    • Radio buttons
    • Text-entry boxes
    • Buttons
    • Sliders
    • Checkboxes

    I'm not asking how to implement them yet, but rather how they should fit together.


    I would have each state that needs UI (so almost all of them) hold a container of Elements. Each Element is-a GUI piece.
    These each have a position, a graphic, etc.
    The part I get more stuck on is their logic.
    To allow for modularity and simplicity, my idea was to avoid classes like MainMenuStartButton or OptionsOneBackButton, and instead be able to write Slider slider1; or Button start;.
    Each element would instead hold a boost::function pointing to a function in a GUI namespace, such as GUI::StartButtonClick or GUI::ColourSliderHover.
    I was just wondering whether this would be exceptionally slow or not.
    And for that matter, is there any easy, simple way of doing GUI for games? No Qt, wxWidgets or anything.
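    A minimal sketch of that callback idea, using std::function (the standard-library equivalent of boost::function). The Gui::startButtonClick name and the Element/Button layout here are hypothetical, just to illustrate the design:

    ```cpp
    #include <functional>
    #include <iostream>

    // Hypothetical callback namespace, as in the question's GUI::StartButtonClick idea.
    namespace Gui {
        void startButtonClick() { std::cout << "start clicked\n"; }
    }

    struct Element {
        int x = 0, y = 0, w = 0, h = 0;           // position and size
        virtual ~Element() = default;
    };

    struct Button : Element {
        std::function<void()> onClick;            // assigned at creation, not subclassed

        bool contains(int px, int py) const {
            return px >= x && px < x + w && py >= y && py < y + h;
        }
        void handleClick(int px, int py) {
            if (contains(px, py) && onClick) onClick();
        }
    };

    int main() {
        Button start;                              // no MainMenuStartButton subclass needed
        start.x = 10; start.y = 10; start.w = 100; start.h = 30;
        start.onClick = Gui::startButtonClick;     // wire up the callback
        start.handleClick(15, 20);                 // inside the button, so it fires
    }
    ```

    The point of the indirection is that Button stays generic; only the assigned function differs between menus.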

  • GUI isn't an easy or simple problem, especially when you get into games and their desire to have dynamic, interesting menus.

    One "easy" thing you can do is try to use middleware. There's a question on that here.

    If you're going to roll it yourself, there are a few things I would suggest.

    • GUI elements generally have a "parent": either another GUI element, a "window", or something similar. Whether children point up at their parent or the parent points down at its children, that link lets you set state/position/etc. on everything below a given element.

    • You can build GUI elements by composition. A lot of widgets have labels (pretty much all of them), and it might make sense to make the label a component or child of your other UI elements rather than copy/pasting the code or inheriting from a base class with label functionality.

    • A lot of so-called specialized widgets are just buttons in disguise. Checkboxes are just buttons with states and special images. Radio buttons are just checkboxes that share meta-state (exactly one in the group is active at any given time) with the other radio buttons. Use this to your advantage. In one game I worked on, "checkboxes" were done with regular buttons entirely through scripting and art.

    • For your first project, consider taking the more direct route. Yes, make your actionable events delegates and assign what you need on element creation (e.g. onMouseButtonDown, onMouseButtonUp, onHover, etc.), but consider just hard-coding everything else you want to happen (e.g. hover color changes, button press sounds) until you get something functional. Once you're at that point, consider turning that behavior into data (e.g. give the button a "press sound" variable, or consider having components that define behavior, which you attach to your UI elements when you create them).

    • Delegates aren't that slow, you probably won't have so many widgets that they become an issue, and menu code generally isn't performance-critical anyway. I wouldn't worry too much about performance at this point. Avoid naive algorithms, then profile once your code is up and running.
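    A sketch of the parent/child and "checkbox is a button in disguise" points above. The Widget/Button/Checkbox names are hypothetical, assuming a simple owning hierarchy:

    ```cpp
    #include <cassert>
    #include <functional>
    #include <memory>
    #include <vector>

    // Hypothetical widget base: every element may own children, so setting
    // visibility (or position, state, ...) on a parent cascades down.
    struct Widget {
        bool visible = true;
        std::vector<std::unique_ptr<Widget>> children;

        void setVisible(bool v) {
            visible = v;
            for (auto& c : children) c->setVisible(v);
        }
        virtual ~Widget() = default;
    };

    // A checkbox is just a button with a bit of extra state and a different image.
    struct Button : Widget {
        std::function<void()> onClick;
        void click() { if (onClick) onClick(); }
    };

    struct Checkbox : Button {
        bool checked = false;
        Checkbox() {
            onClick = [this] { checked = !checked; };  // toggle on click
        }
    };

    int main() {
        Widget menu;
        auto box = std::make_unique<Checkbox>();
        Checkbox* boxPtr = box.get();
        menu.children.push_back(std::move(box));

        boxPtr->click();
        assert(boxPtr->checked);       // toggled on

        menu.setVisible(false);        // hiding the parent hides the children
        assert(!boxPtr->visible);
    }
    ```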

  • Here's one example (source code and all) of a GUI built for OpenGL, specifically for Slick2D: Thingle, a port of Thinlet.

    Another GUI for Slick2D was SUI, which you might want to check out.

    I'd follow these examples and go with an XML-driven GUI. Although these projects are old/dead, you might be able to pick up some ideas from them.

  • I would go for an immediate-mode GUI.

    The nVidia widgets are based on IMGUI (Immediate GUI) from MollyRocket.

    I think that's about as simple as a GUI can ever get.

    All depends on what your needs are.
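    The immediate-mode idea can be sketched as follows. This is a stripped-down version of the pattern popularized by the MollyRocket IMGUI talk, with hypothetical names, per-frame calls, and no real rendering:

    ```cpp
    #include <cassert>

    // Minimal immediate-mode UI state: which widget the mouse is over ("hot")
    // and which one is being pressed ("active").
    struct UIState {
        int mouseX = 0, mouseY = 0;
        bool mouseDown = false;
        int hot = 0, active = 0;       // widget ids; 0 = none
    };

    static bool inRect(const UIState& ui, int x, int y, int w, int h) {
        return ui.mouseX >= x && ui.mouseX < x + w &&
               ui.mouseY >= y && ui.mouseY < y + h;
    }

    // Called every frame; returns true on the frame the mouse is released
    // while the cursor is still over the button. Drawing would happen here too.
    bool doButton(UIState& ui, int id, int x, int y, int w, int h) {
        bool clicked = false;
        if (inRect(ui, x, y, w, h)) {
            ui.hot = id;
            if (ui.mouseDown && ui.active == 0) ui.active = id;
        } else if (ui.hot == id) {
            ui.hot = 0;
        }
        if (!ui.mouseDown && ui.active == id) {
            if (ui.hot == id) clicked = true;
            ui.active = 0;
        }
        return clicked;
    }

    int main() {
        UIState ui;
        ui.mouseX = 15; ui.mouseY = 15;

        ui.mouseDown = true;
        doButton(ui, 1, 10, 10, 100, 30);          // press frame
        ui.mouseDown = false;
        assert(doButton(ui, 1, 10, 10, 100, 30));  // release frame -> clicked
    }
    ```

    There is no retained widget tree at all: the UI is re-declared every frame, which is what makes the approach so small.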

  • In the game I'm working on, we have our own custom GUI framework. I've found it to be very simple, yet it still does everything I need. I don't know whether it's a "pattern" that many others use, but it works well for us.

    Basically, all the buttons, checkboxes, etc. are subclasses of the class Element; and elements have Events. Then we have CompoundElements (which are themselves a subclass of Element), which contain other elements. In each cycle, three methods are called: processEvent(event), tick(time), and draw(surface). Each CompoundElement then calls those methods on its child Elements. In those calls (particularly the processEvent method), we fire off any relevant events.

    All of this is in Python (a much slower language than C++), and it runs perfectly fine.
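    The pattern described above, sketched here in C++ terms (the original is Python; all names are hypothetical):

    ```cpp
    #include <cassert>
    #include <memory>
    #include <vector>

    struct Event { int x, y; };                    // a bare-bones input event

    // Every element answers the same three calls each cycle.
    struct Element {
        virtual void processEvent(const Event&) {}
        virtual void tick(double /*dt*/) {}
        virtual void draw(/* Surface& */) {}
        virtual ~Element() = default;
    };

    // A CompoundElement is itself an Element that forwards each call to its children.
    struct CompoundElement : Element {
        std::vector<std::unique_ptr<Element>> children;

        void processEvent(const Event& e) override {
            for (auto& c : children) c->processEvent(e);
        }
        void tick(double dt) override {
            for (auto& c : children) c->tick(dt);
        }
        void draw() override {
            for (auto& c : children) c->draw();
        }
    };

    // A leaf that counts events, just to show the forwarding works.
    struct Counter : Element {
        int seen = 0;
        void processEvent(const Event&) override { ++seen; }
    };

    int main() {
        CompoundElement root;
        auto counter = std::make_unique<Counter>();
        Counter* p = counter.get();
        root.children.push_back(std::move(counter));

        root.processEvent({1, 2});
        root.processEvent({3, 4});
        assert(p->seen == 2);          // both events reached the child
    }
    ```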

  • If you're going to need fairly sophisticated UIs, the best choice is probably to embed an HTML renderer: you get all the common UI primitives you're used to, and you can trivially add complex UI to your game that behaves the way users expect, looks great, and runs fast (browsers have highly optimized rendering pipelines at this point).

    There are a few libraries out there for embedding Webkit in games: Berkelium is the one I recommend, and I hear Awesomium is pretty good (used by EVE Online) and so is CEF (used by Steam). You can also try embedding Gecko - it's a lot harder to do but has some cool advantages as well. I'm currently using Berkelium for all my game UI (it replaced a combination of custom UI elements and native UI, and dramatically reduced the amount of code I had to write to get things running).

  • I think one of the best ways to "figure it out" would be to have a look at how highly developed GUI frameworks work, such as Windows Forms in .NET.

    For example, they are all Controls, which have a location, size, list of child controls, a reference to its parent, a Text property for whatever the control may use it for, as well as various overridable functions and predefined actions.

    They all contain various events such as Click, MouseOver and Resizing.

    For example, when a child control is changed, it sends a notification up the chain that its layout has changed, and all parents then call PerformLayout().

    Additionally, auto-sizing and auto-arranging are common features of these GUIs, allowing programmers to simply add controls and let the parent controls adjust or arrange them automatically.
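    The notification chain described above could be sketched like this (hypothetical names; real Windows Forms is of course far more involved):

    ```cpp
    #include <cassert>
    #include <vector>

    // A control knows its parent; a child's change bubbles up, and each
    // ancestor re-runs its layout pass.
    struct Control {
        Control* parent = nullptr;
        std::vector<Control*> children;
        int layoutPasses = 0;                 // counts performLayout calls

        void add(Control* child) {
            child->parent = this;
            children.push_back(child);
        }
        void performLayout() { ++layoutPasses; /* re-arrange children here */ }

        // Called when this control's size or content changes.
        void notifyLayoutChanged() {
            for (Control* p = parent; p != nullptr; p = p->parent)
                p->performLayout();
        }
    };

    int main() {
        Control window, panel, button;
        window.add(&panel);
        panel.add(&button);

        button.notifyLayoutChanged();     // say the button grew
        assert(panel.layoutPasses == 1);  // every ancestor re-laid-out
        assert(window.layoutPasses == 1);
    }
    ```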

  • For interest's sake, I thought I would throw out Scaleform. Basically, artists can use Illustrator, Photoshop, etc., and then the UI designer uses Flash to lay out all the interactivity. Scaleform then turns that into code for your engine (that's my understanding of it, at least; I haven't used it). Crysis Warhead used this system for its lobby.

  • Make sure you actually need a sophisticated GUI system for your game. If you're making a computer RPG then obviously it will be a requirement; but for something like a racing game, don't overcomplicate things. Sometimes just rendering textures (custom button images) and having some "if" statements with hard-coded coordinates in your input function can get the job done. A game state machine (FSM) helps with this simple approach. Or, somewhere in the middle, forget the complicated hierarchy and scene graph approach and just have a "Button" class which packages the above-mentioned texture and input checks into simple function calls, but doesn't do anything complicated.

    Nailing down your requirements is important. If you're unsure, just remind yourself YAGNI.
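    The "just textures and if statements" approach might look like this (the coordinates and names are made up):

    ```cpp
    #include <cassert>

    // Hard-coded menu hit test: no widget hierarchy, just rectangles that
    // match wherever the button textures happen to be drawn.
    enum class MenuChoice { None, Start, Quit };

    MenuChoice handleMenuClick(int mx, int my) {
        // Start button drawn at (100, 200), 200x50; Quit just below it.
        if (mx >= 100 && mx < 300 && my >= 200 && my < 250) return MenuChoice::Start;
        if (mx >= 100 && mx < 300 && my >= 270 && my < 320) return MenuChoice::Quit;
        return MenuChoice::None;
    }

    int main() {
        assert(handleMenuClick(150, 220) == MenuChoice::Start);
        assert(handleMenuClick(150, 300) == MenuChoice::Quit);
        assert(handleMenuClick(10, 10)   == MenuChoice::None);
    }
    ```

    Crude, but for a handful of screens it can be all the "GUI system" a game needs.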

  • There have been some really interesting breakthroughs in mainstream UI toolkits over the past few years. Game UIs have been evolving too, but at a somewhat slower pace. This is likely because developing a full-featured UI subsystem is a significant undertaking in itself, and it's difficult to justify the resource investment.

    My personal favorite UI system is the Windows Presentation Foundation (WPF). It really takes the potential for UI differentiation to the next level. Here are some of its more interesting features:

    1. Controls are "look-less" by default. A control's logic and visual appearance are generally decoupled. Every control ships with a default style and template, which describes its default visual appearance. For instance, a button may be represented as a rounded rectangle with an outer stroke, a solid color or gradient background, and a content presenter (to present the caption). Developers (or designers) can create custom styles and templates which change the appearance and (to an extent) the behavior of a control. Styles can be used to override simple properties of a control, like foreground/background color, template, etc.; templates change the actual visual structure.

    2. Styles and templates may include "Triggers". Triggers can listen for certain events or property values (or combinations of values) and conditionally alter aspects of a style or template. For instance, a button template would generally have a trigger on the "IsMouseOver" property for the purposes of changing the background color when the mouse hovers over the button.

    3. There are a few different "levels" of UI elements ranging from lighter weight elements with minimal functionality to full-blown heavy-weight controls. This gives you some flexibility in how you structure your UI. The simplest of the UI element classes, UIElement, has no style or template and only supports the simplest of input events. For simple elements like status indicators, this functionality might be all you need. If you require more extensive input support, you can derive from FrameworkElement (which extends UIElement). Traditional widgets generally derive from Control, which extends FrameworkElement and adds custom style and template support (among other things).

    4. WPF uses a retained, scalable, resolution-independent vector graphics model. On Windows, WPF applications are rendered and composed using Direct3D.

    5. The introduction of WPF included a new declarative programming language called XAML. XAML, based on XML, is the preferred language for declaring UI "scenes". It's actually pretty slick, and though it may appear similar at first glance, it's fundamentally different from existing languages like XUL (and much more powerful).

    6. WPF has a very advanced text subsystem. It supports most if not all OpenType font features, bidirectional ClearType antialiasing, subpixel positioning, etc.

    "Ok, great, now how is this relevant to game development?"

    Back when WPF was still in development (and known as "Avalon"), I was taking a video game design course at Georgia Tech. My final project was a turn-based strategy game, and I wanted a very capable UI. WPF/Avalon seemed like a good route because it had a complete set of full-featured controls and gave me the ability to completely change the look and feel of those controls. The result was a game with a beautiful, crisp UI and a level of functionality that you normally only see in full-fledged applications.

    Now, the problem with using WPF for games is that it's fairly heavy-weight and renders in its own "sandbox". By that, I mean there is no supported way of hosting WPF within a Direct3D or OpenGL game environment. It is possible to host Direct3D content within a WPF application, but you will be limited by the framerate of the WPF application, which is generally lower than you would want. For some types of games, like 2D strategy games (screenshot of my current WPF game project), it could still work great. For anything else, not so much. But you could still apply some of WPF's more compelling architectural concepts in developing your own UI framework.

    There is also an open-source, unmanaged C++/Direct3D implementation of WPF called "WPF/G (WPF for Games)" that is specifically designed for use in games on both Win32 and Windows Mobile. Active development appears to have ceased, but the last time I checked it out, it was a rather complete implementation. The text subsystem was the one area really lacking compared to Microsoft's WPF implementation, but that's less of an issue for games. I would imagine that integrating WPF/G with a Direct3D-based game engine would be relatively straightforward. Going this route could provide you with an extremely capable, resolution-independent UI framework with extensive support for UI customization.

    Just to give you an idea of what a WPF-based game UI might look like, here's a screenshot from my pet project:

    Screenshot from my ongoing WPF-based game project

c++ architecture gui