Android is a Linux-based operating system, and one of the key principles of Linux is the separation between processes. So what happens when you want to cross those boundaries?

Services

In Android, a service is an application component that can perform long-running operations without providing a UI. Services are used for long-running tasks such as retrieving data from remote servers or loading large amounts of data from storage. Services usually run in the same process as the rest of the application and, by default, on its main thread, so long-running work must be moved to a background thread: executing long-running tasks on the UI thread blocks responsiveness to the user's actions and triggers "Application Not Responding" dialogs, which will most certainly lead to a bad user experience and app uninstalls along the way.

By default, a service starts in the application's process like all other application components. So what happens when a developer wants to expose an application's services so they can be used by other applications?

There are multiple approaches to calling a remote service; here we consider two: broadcast receivers and AIDL (Android Interface Definition Language).

Broadcast Receivers

A broadcast receiver is an application component that listens for system events as well as application events. By definition, a broadcast transfers a message to all recipients simultaneously (one-to-all). When using broadcast receivers for communication with a remote service, there are a couple of things that need to be taken into consideration:

1. The maximum size of the Bundle message in the Intent used to send the broadcast.
If the arguments or the return value are too large to fit in the transaction buffer, the transaction will fail and throw a TransactionTooLargeException. It is generally preferable to keep the message size well under 1 MB, which is currently the size of the Binder transaction buffer.

2. A broadcast is transmitted across the whole system, which can introduce a security threat.
Other apps can listen to broadcasts and use them for any purpose. As a rule of thumb, sensitive data should never be broadcast.
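As a rough illustration, the sketch below shows both directions of such an exchange; the action and extra names are hypothetical, not part of any real API:

    import android.content.BroadcastReceiver;
    import android.content.Context;
    import android.content.Intent;
    import android.util.Log;

    // Hypothetical sketch of broadcast-based communication with a remote service.
    public class RemoteServiceClient extends BroadcastReceiver {

        // Receiver side: handle the service's reply broadcast.
        @Override
        public void onReceive(Context context, Intent intent) {
            // Keep payloads small: extras travel through the Binder
            // transaction buffer (roughly 1 MB, shared per process).
            String result = intent.getStringExtra("com.example.EXTRA_RESULT");
            Log.d("RemoteServiceClient", "Remote service replied: " + result);
        }

        // Client side: fire a request broadcast to the remote service.
        public static void requestWork(Context context) {
            Intent request = new Intent("com.example.remote.ACTION_DO_WORK");
            request.putExtra("com.example.EXTRA_QUERY", "a small payload");
            // Any app may listen for this action, so never broadcast sensitive data.
            context.sendBroadcast(request);
        }
    }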

AIDL

AIDL allows developers to expose their services to other applications by defining a programming interface that both the client and the service agree upon in order to communicate with each other. AIDL achieves IPC by marshaling objects (marshaling is the process of transforming the memory representation of an object into a data format suitable for storage or transmission; it is worth noting that marshaling parameters is expensive). The programming interface contains the methods that other processes use to communicate with the service. These methods can accept parameters and return results of the following data types:

1. All primitive types in the Java programming language (such as int, long, char, and boolean).
2. String.
3. CharSequence.
4. List (with a restriction).
5. Map (with a restriction).


The restriction on List and Map is that all of their elements must be one of the supported data types, one of the other AIDL-generated interfaces, or a declared Parcelable.
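For illustration, a minimal hypothetical .aidl file could look like this (package, interface, and method names are made up):

    // IRemoteService.aidl (hypothetical example)
    package com.example.remoteservice;

    interface IRemoteService {
        // Signatures may only use AIDL-supported types: primitives,
        // String, CharSequence, List, Map, and declared Parcelables.
        int getStatus();
        String fetchGreeting(String name);
        List<String> listItemIds();
    }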

The .aidl file must be copied to other applications in order for them to communicate with the service remotely, so any change made to the AIDL interface after the service is released must keep backward compatibility in order to avoid breaking other applications that already use your service.

A hint in the Android API guide tells us to be aware that calls to an AIDL interface are direct function calls, and that no assumptions should be made about the thread in which a call occurs. A pure AIDL interface sends simultaneous requests to the service, which must then handle multi-threading.
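Here is a hedged sketch of the service side, assuming the hypothetical IRemoteService interface above: the generated Stub's methods may be invoked concurrently from the Binder thread pool, so any shared state must be synchronized.

    import java.util.ArrayList;
    import java.util.List;

    import android.app.Service;
    import android.content.Intent;
    import android.os.IBinder;

    // Hypothetical service exposing the AIDL interface; the generated
    // IRemoteService.Stub methods run on Binder threads, not the main thread.
    public class RemoteService extends Service {

        private int callCount = 0; // shared state, guarded by 'this'

        private final IRemoteService.Stub binder = new IRemoteService.Stub() {
            @Override
            public int getStatus() {
                // Binder dispatches calls on pooled threads, so synchronize.
                synchronized (RemoteService.this) {
                    return ++callCount;
                }
            }

            @Override
            public String fetchGreeting(String name) {
                return "Hello, " + name; // stateless, no locking needed
            }

            @Override
            public List<String> listItemIds() {
                return new ArrayList<>(); // placeholder result
            }
        };

        @Override
        public IBinder onBind(Intent intent) {
            return binder;
        }
    }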

AIDL vs Broadcast Receivers

AIDL performs IPC through marshaling, executes calls simultaneously, and requires writing thread-safe code. On the other hand, broadcasts give us intent-based communication with a limited message size that poses a security threat to sensitive information.

Automated Job Recommendations

January 17th 2016, 4:09 am - Category: Big Data

 

   One of the most important foundations for a company to grow properly is choosing employees that fit its needs: not only their technical skills, but also a culture fit. On the other side, choosing the most appropriate job is very important for job-seekers to advance their career and quality of life.

 

   The recruitment process has become increasingly difficult: the right employee must be chosen from plenty of candidates for each job, each with different skills, culture and ambitions.


   Recommender system technology aims to help users find items that match their personal interests, so we can use it to solve the recruitment problem for both sides: companies, to find appropriate candidates, and job-seekers, to find favorable positions. Let's talk about what science can offer to solve this bidirectional problem.

Information

   In the world of data science, the more information we can get, the more accurate our results may be. So let's start with the available information we can collect about job-seekers and jobs.

Job Seeker

  • Personal information, such as language, social situation and location.
  • Information about current and past professional positions held by the candidate. This section may contain company names, positions, company descriptions, job start dates, and job finish dates. The company description field may further contain information about the company (for example, the number of employees and the industry).
  • Information about the educational background, such as university, degrees, fields of education, start and finish dates.
  • IT skills, awards and publications.
  • Relocation ability.
  • Activities (likes, shares, shortlists).

Job

  • Required skills.
  • Nice to have skills.
  • Preferred location (onsite, work from home).
  • Company preferences.
 

Information extraction

   Getting all this information raises another big challenge: most of it may be embedded in plain text (e.g. resumes, job post descriptions). So we need to apply knowledge extraction techniques to those texts in order to get a complete view of requirements and skills.

 

Information enrichment

   A good matching technique requires more than just looking at explicit information. For example, suppose a job post asks for a candidate with knowledge of the Java programming language, while on the other side a candidate claims knowledge of the Spring framework. If we only look for candidates with an explicitly listed Java skill, this candidate will not appear in the results, even though he has an implicit Java skill from using the Spring framework. To solve this problem, we need to enrich both the job and the candidate information using a knowledge base that can link these two skills, or that at least knows that using the Spring framework implies a Java skill. This improves accuracy by matching on meanings and concepts instead of the explicit information only.
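As a minimal illustration of this idea (the skill names and the two implications below are hypothetical examples, not a real knowledge base):

    import java.util.Arrays;
    import java.util.Collections;
    import java.util.HashMap;
    import java.util.HashSet;
    import java.util.List;
    import java.util.Map;
    import java.util.Set;

    public class SkillEnricher {

        // Tiny hand-made knowledge base: each skill implies broader skills.
        // A real system would use a curated ontology and expand transitively.
        private static final Map<String, List<String>> IMPLIES = new HashMap<>();
        static {
            IMPLIES.put("spring", Arrays.asList("java"));
            IMPLIES.put("cakephp", Arrays.asList("php"));
        }

        // Returns the explicit skills plus every skill they imply.
        public static Set<String> enrich(Set<String> explicitSkills) {
            Set<String> enriched = new HashSet<>(explicitSkills);
            for (String skill : explicitSkills) {
                enriched.addAll(IMPLIES.getOrDefault(skill, Collections.emptyList()));
            }
            return enriched;
        }
    }

For the example above, enriching {"spring"} yields {"spring", "java"}, so the candidate now matches an explicit Java requirement.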

 

Guidelines

Let's define some guidelines to keep in mind when working on the matching.

  • Matching individuals to jobs depends on the skills and abilities those individuals should have.
  • Recommending people is a bidirectional process; it should take into account the preferences of both the recruiter and the candidate.
  • Recommendations should be based on the candidate's attributes, as well as the relational aspects that determine the fit between the person and the team members/company with whom the person will be collaborating (fit the candidate to the company, not only to the job).
  • Distinguish between must-have and nice-to-have requirements and tune their contributions with dynamic weights.
  • Use an ontology to categorize jobs as a knowledge base.
  • Enrich job-seeker and job profiles with the knowledge base (knowing the CakePHP framework implies knowing PHP as well).
  • Normalize the data so that no single feature dominates.
  • Learn from other users' job transitions.
 

Recommendation Techniques

Let's list some techniques used in the recommendation field. No single technique is suitable for all cases; you first need to match the technique to the type of data you have and to your overall use case.

  • Collaborative filtering
    • In this technique, we look for similar behavior between job-seekers, so we can find job-seekers with similar interests and recommend jobs from their jobs of interest.
  • Content-based filtering
    • In this technique, we look at the profile content of both the job-seeker and the job post and compute the best match between them, regardless of the behavior of the job-seeker or of the company that posted the job.
  • Hybrid (a minimal sketch of the weighted variant follows this list)
    • Weighted: the recommendation score of an item is computed by combining the results of all the recommendation techniques available in the system.
    • Switching: the system uses some criterion to switch between recommendation techniques.
    • Mixed: several recommenders run simultaneously, and their results are mixed together.
    • Feature Combination: collaborative information is used as additional feature data for each item, and content-based techniques are applied over this augmented data set.
    • Cascade: a staged process in which one recommendation technique first produces a rough ranking of candidates and a second technique refines that recommendation.
  • 3A Ranking: the algorithm maps jobs, companies and job-seekers to a graph with relations between them (apply, favorite, post, like, similar, match, visit, etc.), then relies on these relations and their ranking to recommend items.
    • Content-based matching is used to calculate the similarity between jobs, job-seekers and companies, and between each of them and the others (matching the profiles of a job and a job-seeker).
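
To make the weighted hybrid variant concrete, here is a minimal Java sketch, assuming two or more pre-existing scorers (for example, collaborative and content-based) that each return a score in [0, 1]; all class and method names are hypothetical:

    import java.util.LinkedHashMap;
    import java.util.Map;

    public class WeightedHybridRecommender {

        // A single recommendation technique producing a score in [0, 1].
        public interface Scorer {
            double score(String jobSeekerId, String jobId);
        }

        // Each scorer is paired with the weight of its contribution.
        private final Map<Scorer, Double> scorers = new LinkedHashMap<>();

        public void addScorer(Scorer scorer, double weight) {
            scorers.put(scorer, weight);
        }

        // Weighted hybrid: the final score is the weight-normalized sum
        // of the individual techniques' scores.
        public double score(String jobSeekerId, String jobId) {
            double total = 0.0;
            double weightSum = 0.0;
            for (Map.Entry<Scorer, Double> entry : scorers.entrySet()) {
                total += entry.getValue() * entry.getKey().score(jobSeekerId, jobId);
                weightSum += entry.getValue();
            }
            return weightSum == 0.0 ? 0.0 : total / weightSum;
        }
    }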
 

General recommendation system architecture



Figure 1 - General system architecture.

Evaluation

   To create a self-improving system, you need feedback on the results you produce so you can correct them over time. The best feedback you can get is feedback from the real world, so we can rely on job-seekers' and companies' feedback to adjust the results as desired.

  • Explicit: ask users to rate the recommendations (jobs / candidates).
  • Implicit: track interactions with the recommendations (applied, accepted, shortlisted, ignored).
 

Further Reading

  • Yao Lu, Sandy El Helou, Denis Gillet (2013). A Recommender System for Job Seeking and Recruiting Website. Proceedings of the 22nd International Conference on World Wide Web.
  • Wenxing Hong, Siting Zheng, Huan Wang (2013). A Job Recommender System Based on User Clustering. Journal of Computers, Vol. 8.
  • Shaha T. Al-Otaibi, Mourad Ykhlef (July 2012). A survey of job recommender systems. International Journal of the Physical Sciences, Vol. 7(29).
  • Berkant Cambazoglu, Aristides Gionis (2011). Machine learned job recommendation. Proceedings of the Fifth ACM Conference on Recommender Systems.

It's our pleasure to highlight the initiative taken by our data team leader Ahmed Mahran to effectively contribute to the Spark Time Series project, created by Sandy Ryza, a senior data scientist at Cloudera, the leading big data solutions provider.

 

Time series data has gained increasing attention in the past few years. To quote Sandy Ryza:

 

Time-series analysis is becoming mainstream across multiple data-rich industries. The new Spark-TS library helps analysts and data scientists focus on business questions, not on building their own algorithms.

 

Find the full story here, where he introduces Spark-TS and credits our contributor.

 

We are forever indebted to the open source community; it has enabled us to create wonderful feats. It is our deep belief that we should give back to the community in order to guarantee its health and sustainability. We are proud to have contributed effectively to such a great project, and we look forward to more.

Retrofit 2.0.0-beta1

September 6th 2015, 10:01 am - Category: Mobile

Retrofit is one of the most popular REST client Java libraries; we can use it in Android applications and Java desktop applications.
We can combine Retrofit with the Gson library to work with any REST API.

Here we will show how to use Retrofit 2.0.0 with the Gson library through a simple Android example that fetches a user's information and his list of repos from the GitHub API.
 
First, set up Retrofit, Gson and the Gson converter by adding the following libraries to the dependencies in your Gradle file:

dependencies {
 compile 'com.squareup.retrofit:retrofit:2.0.0-beta1'
 compile 'com.squareup.retrofit:converter-gson:2.0.0-beta1'
 compile 'com.google.code.gson:gson:2.2.4'
}
Then we can build the POJO (Plain Old Java Object) that we will use to deserialize the response, using http://www.jsonschema2pojo.org/. That is a very useful online tool; it converts a JSON object into Java classes that use the Gson library, as in the following images:


Create a new class named GitHubUser in your code; we will use it in the next lines.
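A minimal sketch of that class, assuming we only need the email and avatar_url fields of the GitHub user payload (extend it to match the fields your app actually uses):

    import com.google.gson.annotations.SerializedName;

    // Hypothetical POJO covering only the two fields read below.
    public class GitHubUser {

        private String email;

        // Maps the JSON field "avatar_url" to a Java-style name.
        @SerializedName("avatar_url")
        private String avatarUrl;

        public String getEmail() {
            return email;
        }

        public String getAvatarUrl() {
            return avatarUrl;
        }
    }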

Create a GitHubService interface (note that it must also declare the getUserInfo method used below, in addition to listRepos):

    public interface GitHubService {

        @GET("/users/{user}")
        Call<GitHubUser> getUserInfo(@Path("user") String user);

        @GET("/users/{user}/repos")
        Call<List<Repo>> listRepos(@Path("user") String user);
    }

Now we can use Retrofit by placing the following code in any Android service that extends IntentService (the synchronous execute() call must not run on the main thread):
    // Build a Retrofit instance pointing at the GitHub API, using Gson
    // to convert JSON responses into our POJOs.
    Retrofit retrofit = new Retrofit.Builder()
            .baseUrl("https://api.github.com")
            .addConverterFactory(GsonConverterFactory.create())
            .build();

    // Retrofit generates an implementation of the interface.
    GitHubService gitHubService = retrofit.create(GitHubService.class);

    // Prepare and synchronously execute the call for the user "octocat".
    Call<GitHubUser> gitHubUserCall = gitHubService.getUserInfo("octocat");
    try {
        Response<GitHubUser> response = gitHubUserCall.execute();
        if (response.isSuccess()) {
            GitHubUser user = response.body();
            Log.d("User Email:", user.getEmail());
            Log.d("User Avatar URL:", user.getAvatarUrl());
        }
    } catch (IOException e) {
        e.printStackTrace();
    }

   In the search space, pagination always has to happen. Solr offers basic paging, in which you simply specify the start and rows parameters: start indicates where the returned results should start, and rows specifies how many documents are returned. With basic paging, partial index exporting and migration become a problem. Since basic paging needs to sort all the results before returning the desired subset, it requires a large amount of memory when start is of high order. For instance, start=1000000 and rows=10 causes an inefficient memory allocation due to the sorting of 1,000,010 documents. In a distributed environment the situation is even worse, because the engine has to fetch one million documents from each shard, sort them, and then return the result set.
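
For illustration, here is a minimal SolrJ sketch of such a deep-paging request; the core name and URL are hypothetical, and the client API shown is the Solr 5.x-era SolrJ:

    import org.apache.solr.client.solrj.SolrQuery;
    import org.apache.solr.client.solrj.impl.HttpSolrClient;
    import org.apache.solr.client.solrj.response.QueryResponse;

    public class DeepPagingExample {
        public static void main(String[] args) throws Exception {
            HttpSolrClient solr = new HttpSolrClient("http://localhost:8983/solr/mycore");

            SolrQuery query = new SolrQuery("*:*");
            // Basic paging: Solr must sort and buffer the first 1,000,010
            // matches (on every shard, in a distributed setup) just to
            // return these 10 documents.
            query.setStart(1000000);
            query.setRows(10);

            QueryResponse response = solr.query(query);
            System.out.println("Returned docs: " + response.getResults().size());
            solr.close();
        }
    }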