Identifying connected lines drawn free-hand by a user

  • Identifying connected lines drawn free-hand by a user (asked by rawrgoesthelion)

    I have a series of 'images' described by a mixture of connected lines and curves. Users will draw on the screen, free hand, and my goal is to break their drawing down into a series of lines and curves that can be matched with the 'images' in my set.

    For the sake of simplicity, let's assume this is occurring on a touch screen. These lines will be connected. Each time the user's finger moves, the dx and dy is recorded. The drawing is considered complete and analyzed when the user's finger leaves the screen.

    I'm having trouble figuring out a good way to break the user's drawing down into lines. Is there any well known approach to this problem, a C++ library that solves it, or any good articles/technical papers on how to achieve this?

  • I think you can find good resources by searching for articles about OCR (optical character recognition); as far as I can tell, your problem is one of the sub-problems of OCR. You could also search for "pattern recognition".

    After searching a little, I found this article. I only looked at the images (I didn't read it in full), but it seems to present an algorithm that extracts both curves and lines, and it appears to do a good job!

  • To rephrase the original question:

    You have a series of points which represent samples of a continuous (but not necessarily straight) line, and you want to represent that line using fewer samples.

    Basically, what you're trying to find is the samples that don't contribute much to the overall shape of the line. The fastest way to do this, in my experience, is to calculate, one sample at a time, how much "error" would be introduced by removing that sample, and then actually remove the one that results in the least total error.

    You then repeat this process for the remaining points, continually finding "the least bad point to remove" and removing it, as long as the total error induced in the line stays below some maximum "error" threshold.

    Importantly, your error calculations need to be made against the original line shape, NOT against the progressively optimised shape; otherwise the optimised line can drift away from the drawn shape.

    I have sample code (from years ago) available here:

    Look inside vpDrawable::OptimiseOutOneIndex() for my implementation of this process.
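    A minimal sketch of this remove-the-least-bad-point loop (the names and the point-to-segment error metric are my own; the vpDrawable code referenced above may differ in detail). Note how the error for a candidate removal is always measured against the original samples, per the caveat above:

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>
#include <cstddef>
#include <vector>

struct Pt { double x, y; };

// Perpendicular distance from p to the segment a-b.
double segDist(Pt p, Pt a, Pt b) {
    double dx = b.x - a.x, dy = b.y - a.y;
    double len2 = dx * dx + dy * dy;
    if (len2 == 0.0) return std::hypot(p.x - a.x, p.y - a.y);
    double t = ((p.x - a.x) * dx + (p.y - a.y) * dy) / len2;
    t = std::max(0.0, std::min(1.0, t));
    return std::hypot(p.x - (a.x + t * dx), p.y - (a.y + t * dy));
}

// Repeatedly remove the point whose removal introduces the least error,
// measured against the ORIGINAL samples, until any further removal would
// exceed maxError. Returns the indices (into `orig`) that are kept.
std::vector<size_t> simplify(const std::vector<Pt>& orig, double maxError) {
    std::vector<size_t> kept(orig.size());
    for (size_t i = 0; i < kept.size(); ++i) kept[i] = i;

    while (kept.size() > 2) {
        double bestErr = 1e300;
        size_t bestPos = 0;
        // Try removing each interior point of the current simplified line.
        for (size_t k = 1; k + 1 < kept.size(); ++k) {
            size_t a = kept[k - 1], b = kept[k + 1];
            // Error = worst deviation of the original samples between the
            // neighbours from the segment that would replace them.
            double err = 0.0;
            for (size_t i = a + 1; i < b; ++i)
                err = std::max(err, segDist(orig[i], orig[a], orig[b]));
            if (err < bestErr) { bestErr = err; bestPos = k; }
        }
        if (bestErr > maxError) break;      // nothing cheap left to remove
        kept.erase(kept.begin() + bestPos); // drop the least-bad point
    }
    return kept;
}
```

    The surviving indices mark the endpoints of the straight-line segments the drawing decomposes into.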

  • Recognising arbitrary shape vectors is hard!

    A simpler technique is to use a grid and count the edges crossed by the stroke drawn by the user. This list of edges is used as a "signature" which is compared against a dictionary of signatures of pre-defined shapes.


    1. Sub-divide the input area into a grid (e.g. 3x3, with the outer edges extending to infinity), and give each edge a unique ID. The exact size and number of divisions depends on your implementation; more grid divisions give greater versatility at the cost of robustness. (Image: sub-divided grid area with labelled edges.)

    2. The user draws a line over the grid to create a stroke. Note that the grid does not actually need to be visible on the screen, but doing so will assist in the usability of the interface. (Image: user-drawn line over the grid.)

    3. The path taken by the stroke is analysed for intersections with grid edges. Each edge that the stroke intersects is added to a list of edges. This becomes the signature for the character. In this example, the "S" curve intersects with the edges Y12, Y7, X7, Y8, Y13, X14, Y14, Y9. (Image: grid intersections.)

    4. The signature is compared against a dictionary of pre-defined signatures and the closest match (or matches) is returned.
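    A sketch of step 3, the edge-crossing extraction. The edge-naming scheme here is my own ("V1_0" means the piece of interior vertical grid line 1 lying in row 0, and so on; the answer's Y12/X7 labels come from a different numbering), and the stroke is assumed to be normalised into the unit square:

```cpp
#include <algorithm>
#include <cassert>
#include <string>
#include <vector>

struct Pt { double x, y; };

// Walk the stroke segment by segment and record each interior grid line
// it crosses, bucketed by the row/column of the crossing point. For an
// n x n grid there are n-1 interior vertical and horizontal lines.
std::vector<std::string> signature(const std::vector<Pt>& stroke, int n = 3) {
    std::vector<std::string> sig;
    auto push = [&](const std::string& e) {
        if (sig.empty() || sig.back() != e) sig.push_back(e); // skip repeats
    };
    for (size_t i = 0; i + 1 < stroke.size(); ++i) {
        Pt a = stroke[i], b = stroke[i + 1];
        // Crossings of the interior vertical lines x = k/n.
        for (int k = 1; k < n; ++k) {
            double x = double(k) / n;
            if ((a.x - x) * (b.x - x) < 0) {            // segment straddles line
                double t = (x - a.x) / (b.x - a.x);
                int row = std::min(n - 1,
                                   std::max(0, int((a.y + t * (b.y - a.y)) * n)));
                push("V" + std::to_string(k) + "_" + std::to_string(row));
            }
        }
        // Crossings of the interior horizontal lines y = k/n.
        for (int k = 1; k < n; ++k) {
            double y = double(k) / n;
            if ((a.y - y) * (b.y - y) < 0) {
                double t = (y - a.y) / (b.y - a.y);
                int col = std::min(n - 1,
                                   std::max(0, int((a.x + t * (b.x - a.x)) * n)));
                push("H" + std::to_string(k) + "_" + std::to_string(col));
            }
        }
    }
    return sig;
}
```

    One simplification worth noting: when a single input segment crosses both a vertical and a horizontal line, this sketch does not order the two crossings along the segment; with densely sampled touch input the segments are short enough that this rarely matters.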

    Usability considerations

    1. The user input will not be perfect, and the search needs to cater for inexact matches caused by extra or missing edges in the input. One way to deal with this problem is to build a directed graph of the input, which records the relationship of the crossed edges to each other: it will tell you, for example, that X7 comes before Y8 but after Y12. The search then compares this against the graph of each character in the dictionary. Each node that matches is assigned a percentage based on the exactness of the match, and the character (or characters) with the highest total percentage is used as the overall match.

    2. Multiple variations of each character in the dictionary may also be helpful. In the example above, the signature is valid for the character when drawn from the upper-right quadrant, but it may be useful to have a signature for the same character drawn from the lower-left quadrant, which will essentially be the reverse of the example.

    3. Characters may be made of multiple strokes. For example, if the user draws an upside-down "U" and then a "-", the algorithm may match this to two separate characters such as "n-", or it could combine the two strokes and recognise an "A". A time-based heuristic could be used as the differentiator, so that multiple strokes in quick succession result in one character, whereas strokes drawn with more time in between result in multiple characters.

    4. If the input system is to be used for handwriting, the number of potential characters and input styles can be large. A word dictionary and/or grammatical analysis could be used to find the set of characters most likely to occur next; this is intersected with the list of matching character signatures to produce a likely candidate.
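    As one simple stand-in for the directed-graph comparison in point 1, the longest common subsequence between two signatures gives an order-respecting score: it rewards edges appearing in the same relative order (X7 after Y12) while tolerating extra or missing edges. The scoring formula here is my own choice, not from the answer:

```cpp
#include <algorithm>
#include <cassert>
#include <string>
#include <vector>

// Order-respecting similarity between two edge signatures in [0, 1]:
// 1.0 means identical, lower values mean more insertions/deletions
// are needed to reconcile the two edge sequences.
double matchScore(const std::vector<std::string>& a,
                  const std::vector<std::string>& b) {
    size_t n = a.size(), m = b.size();
    if (n == 0 && m == 0) return 1.0;
    // Classic O(n*m) longest-common-subsequence table.
    std::vector<std::vector<size_t>> lcs(n + 1, std::vector<size_t>(m + 1, 0));
    for (size_t i = 1; i <= n; ++i)
        for (size_t j = 1; j <= m; ++j)
            lcs[i][j] = (a[i - 1] == b[j - 1])
                            ? lcs[i - 1][j - 1] + 1
                            : std::max(lcs[i - 1][j], lcs[i][j - 1]);
    return 2.0 * lcs[n][m] / double(n + m);
}
```

    The dictionary entry with the highest score wins; a minimum-score threshold rejects strokes that match nothing well.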

    EDIT: Additional considerations

    1. It may not always be desirable to show the grid during input. In this case it may be necessary to scale the stroke before applying it to the grid. Naive scaling would work but the results may be skewed by points which fall far outside the input region. A workaround is to find the centroid (the average x,y coordinate) of the input stroke, then calculate the average distance of each point in the stroke from this centroid. The stroke is then scaled based on this average distance.

    2. I posted another answer with a different approach using vectors. That algorithm also used the Levenshtein distance to match the character signature. The same approach can be applied to this algorithm during the search phase.
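    The centroid-based normalisation from point 1 above can be sketched as follows (function and type names are my own):

```cpp
#include <cassert>
#include <cmath>
#include <vector>

struct Pt { double x, y; };

// Normalise a stroke by its centroid and by the mean distance of the
// points from that centroid. Unlike bounding-box scaling, a few
// far-flung outlier points only shift the average a little.
std::vector<Pt> normalise(const std::vector<Pt>& stroke) {
    double cx = 0, cy = 0;
    for (const Pt& p : stroke) { cx += p.x; cy += p.y; }
    cx /= stroke.size(); cy /= stroke.size();

    double meanDist = 0;
    for (const Pt& p : stroke) meanDist += std::hypot(p.x - cx, p.y - cy);
    meanDist /= stroke.size();
    if (meanDist == 0) meanDist = 1; // degenerate: all points coincide

    std::vector<Pt> out;
    out.reserve(stroke.size());
    for (const Pt& p : stroke)
        out.push_back({(p.x - cx) / meanDist, (p.y - cy) / meanDist});
    return out;
}
```

    After this, strokes of any size and position land in a common coordinate frame and can be mapped onto the (invisible) grid.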

  • Here's an implementation of another algorithm:

    It works by decomposing the input stroke into a set of vectors which are multiples of 45 degrees. This simplified representation is compared against a database using the Levenshtein distance.

    * Note: the Levenshtein distance can also be applied to the other algorithm I posted here.
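    A minimal sketch of this vector decomposition plus the Levenshtein comparison (the '0'..'7' direction encoding, with 0 = east counting counter-clockwise, is my own convention; the linked implementation may differ):

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>
#include <string>
#include <vector>

struct Pt { double x, y; };

const double kPi = std::acos(-1.0);

// Quantise each move of the stroke to the nearest multiple of 45 degrees,
// producing a direction string over '0'..'7'; consecutive duplicates are
// collapsed so "east, east, north" becomes "02".
std::string directions(const std::vector<Pt>& stroke) {
    std::string s;
    for (size_t i = 0; i + 1 < stroke.size(); ++i) {
        double ang = std::atan2(stroke[i + 1].y - stroke[i].y,
                                stroke[i + 1].x - stroke[i].x);
        int d = int(std::lround(ang / (kPi / 4))) & 7; // nearest 45-degree sector
        if (s.empty() || s.back() != char('0' + d)) s += char('0' + d);
    }
    return s;
}

// Standard two-row Levenshtein distance between direction strings.
size_t levenshtein(const std::string& a, const std::string& b) {
    std::vector<size_t> prev(b.size() + 1), cur(b.size() + 1);
    for (size_t j = 0; j <= b.size(); ++j) prev[j] = j;
    for (size_t i = 1; i <= a.size(); ++i) {
        cur[0] = i;
        for (size_t j = 1; j <= b.size(); ++j)
            cur[j] = std::min({prev[j] + 1, cur[j - 1] + 1,
                               prev[j - 1] + (a[i - 1] == b[j - 1] ? 0 : 1)});
        std::swap(prev, cur);
    }
    return prev[b.size()];
}
```

    Matching then reduces to computing `levenshtein(directions(stroke), entry)` against every direction string in the database and taking the minimum.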

c++ input hand-drawn