Thursday, 25 July 2013

Android 4.3 released by Google

Posted by Mahesh Doijade

   Google today announced an incremental update to Android with the release of Android 4.3, an updated version of Jelly Bean that the company calls "an even sweeter Jelly Bean", perhaps owing to the new features that come with it. It brings several performance optimizations and other features for users as well as developers. Android 4.3 adds restricted profiles, which can be used to limit app and content usage; a parent, for instance, can set up profiles for specific family members.
   Games should also look better, as Android 4.3 supports OpenGL ES 3.0, enabling realistic, high-performance 3D graphics. It adds dial-pad autocomplete: start touching numbers or letters and the dial pad suggests matching phone numbers or names. To use this feature, open your Phone app settings and enable "Dial pad autocomplete".
   It comes with Bluetooth Smart support, so your smartphone becomes Bluetooth Smart Ready, along with Bluetooth AVRCP 1.3 support. There is also a new Wi-Fi location mode, which lets apps detect location via Wi-Fi without keeping Wi-Fi turned on all the time.
   Text input is improved and made easier by an optimized algorithm for tap-typing recognition. User switching is faster too, so switching users from the lock screen takes less time. There is also support for the Afrikaans, Swahili, Zulu, Amharic, and Hindi languages.
   And here is good news for Nexus device owners: Google is rolling out the Android 4.3 update immediately for the Nexus 4, Nexus 7, and Nexus 10.

 


OpenMP Getting Started

Posted by Mahesh Doijade

 

What is OpenMP ?

OpenMP is an API for writing portable multi-threaded parallel programs. It consists of a specification of a set of compiler directives, environment variables, and library routines designed to enable shared-memory programming for C/C++ as well as Fortran programs. OpenMP stands for Open Multi-Processing.

• OpenMP is managed by the OpenMP Architecture Review Board (ARB).
• In most cases it is much easier to program with OpenMP than with POSIX threads (pthreads).
• It needs explicit compiler support; it is supported by compilers such as GCC 4.2 onwards, the Portland Group's pgcc, Solaris cc, Intel's icc, SGI cc, IBM xlc, etc.

The beauty of OpenMP is that it enables one to carry out incremental parallelization of the sequential code. 


OpenMP Getting Started :

OpenMP is based on the fork-join model, as shown in the figure. Initially there is a single master thread executing the sequential part; it forks into multiple threads whenever a parallel region is encountered. Once the work in the parallel region is accomplished, the master thread continues with the work that follows the parallel region.
[Figure: the OpenMP fork-join model]
  • To start with, a parallel region is specified with the following compiler directive:
          #pragma omp parallel
          {
                 //code for work to be done by each thread to be placed here.
          } 
  • The desired number of threads can be set with the library routine omp_set_num_threads(); alternatively, set the environment variable OMP_NUM_THREADS to the desired number of threads.
  • To find how many threads are running in a given parallel region, use the library routine omp_get_num_threads(); its return value is the number of threads.
  • Each thread can get its thread ID within the team of threads in the parallel region using the library routine omp_get_thread_num().
  • To synchronize threads for work that needs mutual exclusion, one can use:
        #pragma omp critical
        {
               // Code which requires mutual exclusion
        }
    Another mechanism is the library routine omp_set_lock(lock); and if mutual exclusion is needed for just a single instruction, it is better to use #pragma omp atomic.
Following is a Hello World OpenMP program in C using 4 threads.
#include <stdio.h>
#include <omp.h>

int main()
{
    omp_set_num_threads(4);

    #pragma omp parallel
    {
        printf("\nHello World from thread ID - %d from total_threads - %d\n", omp_get_thread_num(), omp_get_num_threads());
    }
}
To compile the above code using GCC 4.2 or a later version, use the following command:

gcc your_prog_name.c -fopenmp

The output of this program would be something like the following (the order of the lines varies from run to run):

Hello World from thread ID - 0 from total_threads - 4
Hello World from thread ID - 2 from total_threads - 4
Hello World from thread ID - 1 from total_threads - 4
Hello World from thread ID - 3 from total_threads - 4
 


Saturday, 20 July 2013

What is Parallel Programming? Why do you need to care about it?

Posted by Mahesh Doijade

    
        Parallel programming involves solving a problem by dividing it into multiple independent sub-problems, creating an execution entity for each, solving the sub-problems simultaneously on different processors, and establishing communication among the entities for whatever coordination the problem requires. It is, in other words, the mechanism for writing programs that effectively utilize the available computational resources by executing code simultaneously on several computational nodes. Numerous parallel programming models are available, but the dominant ones are MPI for distributed-memory programming and OpenMP for shared-memory programming.
           Parallel programming is historically considered one of the more difficult areas a programmer can tackle, as it comes with issues such as race conditions, deadlock, non-determinism, and scaling limits due to inherently sequential parts of the code, to name a few. You need to care about parallel programming only if you really require good performance in your application, because performance is the crucial reason for going parallel. If you are not concerned with performance, just write your sequential code and be happy; it will likely be much easier and get done relatively faster.

              Some of the reasons to go for parallel programming are:
  • Commodity processors have fully adopted parallel architectures: virtually all processors now ship with multiple computing cores in a single package, which means that writing sequential code and waiting a couple of years for CPUs to get faster is no longer an option. Parallel programming has come to the mainstream, and making optimal use of the available resources requires it.
  • Efficient parallel programs result in shorter execution times and are therefore more cost-effective.
  • Sequential computing faces several limitations, such as the power wall and the limits to transistor miniaturization, to name a few.
  • For very large problems such as weather forecasting, medical imaging, financial and economic modelling, and bioinformatics, parallel computing is the only way.

Sunday, 14 July 2013

Nested Parallelism OpenMP example

Posted by Mahesh Doijade

       Nested parallelism has been available in OpenMP since OpenMP 2.5. It enables the programmer to create a parallel region within a parallel region itself, so each thread in the outer parallel region can spawn additional threads when it encounters the inner parallel region. Nested parallelism can be enabled at runtime either by setting the environment variable OMP_NESTED to TRUE before running the program, or by calling omp_set_nested() with the argument 1 to enable it (0 disables it). A basic nested-parallelism OpenMP example is given below; compile this code using g++ with the -fopenmp flag.

#include <stdio.h>
#include <omp.h>

int main()
{
    int i;
    omp_set_num_threads(4);
    omp_set_nested(1); // 1 enables nested parallelism; 0 disables it.

    #pragma omp parallel // parallel region begins
    {       
        printf("outer parallel region Thread ID == %d\n", omp_get_thread_num());
        /*
            Code for work to be done by outer parallel region threads over here.
        */
        #pragma omp parallel num_threads(2) // nested parallel region
        {   
            /*
                Code for work to be done by inner parallel region threads over here.
            */       
            printf("inner parallel region thread id %d\n", omp_get_thread_num());
           
            #pragma omp for
            for(i=0;i<20;i++)
            {
                // Some independent iterative computation to be done.
            }
        }
    }
    return 0;
}
