Sunday, February 26, 2006

Augmented Reality - It is a reality, indeed

For a long time, a set of new technologies, such as Virtual Reality and Augmented Reality, has been around. We have known about them for many, many years.

I am now involved in a company that works with Augmented Reality technology. I will talk about it in a future post, but let me just say it is something incredible and, most importantly, USEFUL.

That was often the problem: great technology, impressive results, but no interesting and REAL application. Now, however, we are at a very interesting moment, when some real applications of, e.g., Augmented Reality are becoming a reality.

Augmented reality (AR) is a field of computer research which deals with the combination of the real world and computer-generated data. At present, most AR research is concerned with the use of live video imagery which is digitally processed and "augmented" by the addition of computer-generated graphics. Advanced research includes the use of motion-tracking data, fiducial marker recognition using machine vision, and the construction of controlled environments containing any number of sensors and actuators. (Wikipedia's definition of AR.)


I have found two videos. Both are quite interesting, and both show in images, better than words could, what Augmented Reality is.

In the first video, a demo by the company Total Immersion. Although no real or useful application is shown, you can imagine some fields in which this technology could help a lot (education, simulation, etc.).

(if you have bandwidth problems, watch the whole video to the end, then press 'play' again; the cache will normally help you the second time)



In the second one, a new concept called an 'augmented map' is presented. You can find more information here.

Thursday, February 23, 2006

Do you want to see the future of graphical user interfaces?

No words needed. Just enjoy!

(if you have bandwidth problems, watch the whole video to the end, then press 'play' again; the cache will normally help you the second time)



Here is more information about the authors:
Multi-Touch Sensing through Frustrated Total Internal Reflection

Detecting multiple finger touches on a rear-projection surface. We introduce a simple technique that enables robust multi-touch sensing at a minimum of engineering effort and expense. It relies on frustrated total internal reflection (FTIR), a technique familiar to the biometrics community, where it is used for fingerprint image acquisition. It acquires true touch information at high spatial and temporal resolutions, and is scalable to very large installations.

Han, J. Y. 2005. Low-Cost Multi-Touch Sensing through Frustrated Total Internal Reflection. In Proceedings of the 18th Annual ACM Symposium on User Interface Software and Technology.
www.advancedgui.com



From MIT:

Wednesday, February 22, 2006

An introduction to algorithm complexity analysis (I)


I cannot understand why so many students have problems learning the principles of algorithm complexity. So I have finally decided to prepare a short introduction, along with some tips, to help them.

In a previous post, I described some strong reasons to take complexity analysis into account before coding. In this post I am going to describe the four basic steps you must follow to study the complexity of an algorithm. In future posts, I will provide more details and examples for each step.

The first thing to take into account is the difference between efficiency and effectiveness. Effectiveness means that the algorithm carries out its function: it is correct. But when we talk about efficiency, we are looking for something more... we want an algorithm that is correct and has the best possible performance.

Basic steps in complexity analysis of an algorithm

1. Select the computational resource you want to measure. Normally there are two options: time and memory. But other resources can be studied as well, e.g., network traffic.

2. Look at the algorithm and pay attention to its loops or recursion. Try to identify the variables and conditions that make the algorithm do more or less work. Sometimes it is one variable, sometimes several... This is our size of input. Remember that with complexity analysis we are interested in obtaining a function that relates the size of input to the computational resource.
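To illustrate step 2, here is a small hypothetical sketch (the function names are mine, not from any library). In each routine, a single quantity, the length of the list, determines how much work is done, so it is the natural size of input:

```python
def sum_elements(data):
    """One pass over the list: work grows linearly with len(data)."""
    total = 0
    for x in data:          # executes len(data) times
        total += x
    return total

def count_pairs(data):
    """Two nested passes: work grows with the square of len(data)."""
    pairs = 0
    for i in range(len(data)):
        for j in range(i + 1, len(data)):   # about n^2 / 2 iterations in total
            pairs += 1
    return pairs
```

In both cases, the loops point directly at the size of input: `len(data)` is the variable the cost function will depend on.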

3. Once we have the size of input of the algorithm, try to see whether there are different cases inside it. Normally we pay attention to the best, worst, and average cases. What are 'cases' in this context? Circumstances in which the algorithm can behave differently. These cases NEVER depend on specific values (high or low) of the size of input. NEVER. Working with an array of one million elements is not a 'worst case'; it is just a specific size of input, which results in a specific time or memory consumption.

Having different cases means that you cannot provide just one function relating the size of input to the computational resource. Having, for example, a worst case and a best case means the algorithm has two different behaviours. It is like having two algorithms inside one.

So, in the definition of the different cases you are not allowed to use the term 'size of input'. It has to be something else. Look at the loops or recursion, and try to find the specific conditions that make the loop or recursion finish early, before the end of the input range. That is normally where the clue to the different cases lies. An example? The linear search for an element in an array. If the element we are looking for is not in the array, the algorithm, in order to give a negative answer, has to go from the beginning to the end of the array. That is the worst case. But if the element is in the first position of the array, there is no iteration at all: we find it immediately. That is the best case. Note that the 'size of input' of this algorithm, the number of elements in the array, is not part of the definition of the best and worst cases.
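The linear search just described can be sketched as follows (a minimal illustration; the function name is mine):

```python
def linear_search(data, target):
    """Return the index of target in data, or -1 if it is not present."""
    for i, value in enumerate(data):
        if value == target:
            return i        # best case: target at position 0, one comparison
    return -1               # worst case: target absent, len(data) comparisons
```

The early `return` inside the loop is exactly the kind of condition to look for: it is what separates the best case (found immediately) from the worst case (not found at all), independently of how large the array is.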

Take into account that it is not only important to detect these cases, but also to define them: to describe the circumstances under which each case occurs.

4. Now, for each of the cases our algorithm has, we have to count the computational resource consumption. If we are studying temporal cost, for example, we count instructions (or calls, in a recursive algorithm). What we need is the function that relates the size of input to the computational resource. In this function we are interested in its highest-degree term, i.e., in its asymptotic profile (how the algorithm behaves for large values of the size of input). At the end of this step, the important thing, rather than following a precise notation, is to be able to say whether the algorithm, in each case, is LINEAR, QUADRATIC, EXPONENTIAL, LOGARITHMIC, etc.
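As a rough sketch of step 4, you can even instrument an algorithm to count its basic operations and watch the asymptotic profile appear as the input grows. The example below is illustrative only (the counter stands in for a hand analysis); it uses bubble sort, whose comparison count is quadratic:

```python
def bubble_sort_count(data):
    """Bubble sort that also counts comparisons; its profile is QUADRATIC."""
    a = list(data)
    comparisons = 0
    for i in range(len(a)):
        for j in range(len(a) - 1 - i):
            comparisons += 1
            if a[j] > a[j + 1]:
                a[j], a[j + 1] = a[j + 1], a[j]
    return a, comparisons

# Doubling the size of input roughly quadruples the comparison count,
# the signature of a quadratic algorithm.
_, c100 = bubble_sort_count(range(100))   # c100 == 4950
_, c200 = bubble_sort_count(range(200))   # c200 == 19900
```

Counting like this does not replace the analysis, but it is a quick way to check that the function you derived by hand (here, n(n-1)/2 comparisons) matches what the algorithm actually does.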

These are the basic steps to follow when facing algorithm complexity analysis. In future posts, I will try to provide more details and some examples for each of these steps.

Sunday, February 19, 2006

The cheapest Tablet PC in the world!


Do you like Tablet PCs? They're more than beautiful technological toys. The problem is that they're quite expensive. But there is a low-cost solution.

Recently, I bought a pen tablet. Specifically, a Genius MousePen 8x6.

My motivation was to get rid of the notes, sketches... and hundreds of sheets of paper I use every week to support my classes, research, and other activities.

The result is fantastic. But ...

I had to change my Windows version from XP Professional to Tablet PC Edition 2005. Why? To take advantage of the nice handwriting features only available in that operating system.

With Windows XP and the pen tablet's tools and plug-ins you can annotate, but with the Tablet Edition a lot of new functions become available in products such as MS Office, OneNote, etc.

Now I'm enjoying the good things about Tablet PCs, but at the best price. And not only on my laptop, but on my desktop computer as well.

Saturday, February 18, 2006

Algorithm Complexity - don't forget it, please



It is interesting to see how fast software engineers are able to forget everything they have learnt about algorithm complexity. Personally, I spend a lot of time trying to show my students the importance of complexity. Complexity is not only important as a way to evaluate and select the best algorithm for a specific problem; it is a way of thinking...

Complexity analysis is the engineering method applied to programming

With new languages, new platforms, new and powerful computers... it is not as important as it once was to think before coding. We code the very first thing we think of. We run it... and, if there is no error, that's it.

We have powerful libraries and frameworks. We don't need to implement a stack, queue, list, hash table, etc. from scratch. We just use the implementation that comes with our environment, and sometimes we think that's enough. Of course, it isn't.

Do you know the complexity (temporal and spatial) of the different methods or functions of the library you are using? Even if you know them, have you considered them in your proposed solution? Sometimes you have several alternatives; did you consider them all?

Did you think about complexity the last time you implemented something?

Computational resources... With the new technologies, perhaps we have to change the traditional approach to complexity. Traditionally we speak about temporal and spatial cost as the main computational resources (time and memory). But with a client/server approach, or with distributed or grid systems... do we consider network traffic? It's just one example.

UML and all the new methodologies for analysis and design force us to think before facing the coding stage. But once we are coding, and I'm not talking only about information systems but about any kind of application, do we apply what we have learnt about complexity? Are we forgetting what makes us engineers when we code?

Answer these questions:

  • Is it worthwhile to sort an array of elements before performing some searches on it?

  • Is it better to use a binary search tree or a hash table?

  • Is Quicksort always your best choice when you want to sort an array?

  • Is it better to transfer a table (or several tables) of a database from the server to the client and carry out the queries there afterwards?

If you have answered any of these questions with anything other than 'it depends', maybe you should review your knowledge of complexity.
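To see why 'it depends' is the right answer to the first question, consider this back-of-the-envelope sketch (the cost formulas are simplified approximations of mine, not exact counts): sorting first only pays off when there are enough searches to amortize the cost of the sort.

```python
import math

def cost_linear(n, searches):
    """Approximate comparisons: each unsorted search scans ~n/2 elements."""
    return searches * n / 2

def cost_sort_then_binary(n, searches):
    """Approximate comparisons: one ~n*log2(n) sort, then log2(n) per search."""
    return n * math.log2(n) + searches * math.log2(n)

# For n = 10,000 elements: a single search favours the plain scan,
# while a thousand searches amortize the sorting cost many times over.
n = 10_000
few = cost_linear(n, 1) < cost_sort_then_binary(n, 1)           # True
many = cost_linear(n, 1_000) > cost_sort_then_binary(n, 1_000)  # True
```

The same kind of break-even reasoning applies to the other questions: the best choice depends on the sizes and access patterns involved, not on the data structure alone.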

Wednesday, February 15, 2006

A new MFC development - 3D Biosignal Viewer

I have recently started a new project using MFC. Yes, it is still alive. Why start something based on MFC in 2006? Basically, because of OpenGL.

I like C# and .NET, but I don't like the OpenGL wrappers I have seen so far. The best seems to be Tao. I prefer to wait and see the future of OpenGL inside Windows Vista before deciding to switch to C# + OpenGL.

Another thing to take into account is whether managed code is fast enough for developing real-time applications, especially when we talk about computer graphics. Hidden code, the garbage collector... uhmmm...

The application, called 3D Biosignal Viewer, is currently only able to visualize two-variable functions and height maps (using techniques similar to those used in 3D terrain visualization).



The IDE and platform I'm using is Visual Studio 2005. A fantastic tool, although a bit slow on my laptop. The MFC version seems to be 8.0, with some differences when the wizard creates the project template. Any improvements in this new version? I don't know what continuity MFC will have, but it is still a good option for OpenGL development.

On the other hand, C++ makes me feel more confident about performance. What is the future of C++? Are C# and Java going to kill it?

My first post

This is the first post of my new blog. I am sure that nobody is going to read this, but who knows...
When I decided to write a blog, my motivation was twofold. First of all, I think it is a good chance to improve my always-poor English. Second, well... it is a good way to keep a kind of record of my thoughts on computer science, computer graphics, and software development.
So, let's start with it.