Asynchronous I/O Jobs

4 jobs were found based on your criteria

  • Fixed-Price – Est. Budget: $2,500.00 Posted
    I am seeking an Android mobile app developer with experience developing apps that use a local data cache and synchronize with web-service APIs. I need a creative, smart and fast developer with detailed knowledge of Android OS development. Must have the following skills: • Extensive development knowledge of PhoneGap and/or Titanium • Detailed experience using local cache data stores (SQLite or other) on the mobile device • Detailed knowledge of RESTful API calls • Experience producing detailed documentation • A collaborative, fast-paced approach ...
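The local-cache-plus-API-synchronization requirement in this posting is essentially the cache-aside pattern. A minimal, platform-agnostic sketch follows; Python and an in-memory SQLite database stand in for the Android stack, and `fetch_remote` is a hypothetical callable representing the web-service call:

```python
import sqlite3

def get_record(key, fetch_remote, db):
    """Cache-aside lookup: try the local SQLite cache first; on a miss,
    call the remote API (fetch_remote, a hypothetical stand-in) and
    store the result locally for next time."""
    db.execute("CREATE TABLE IF NOT EXISTS cache (key TEXT PRIMARY KEY, value TEXT)")
    row = db.execute("SELECT value FROM cache WHERE key = ?", (key,)).fetchone()
    if row is not None:
        return row[0]                      # served from the local cache
    value = fetch_remote(key)              # only hit the network on a miss
    db.execute("INSERT OR REPLACE INTO cache VALUES (?, ?)", (key, value))
    db.commit()
    return value
```

On Android the same shape would use an SQLite-backed DAO with a Retrofit or HTTP client call on the miss path; the point here is only the ordering: cache first, network second, write-back last.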
  • Fixed-Price – Est. Budget: $100.00 Posted
    Need C++ code that will download multiple web pages at the same time / asynchronously. Use boost::asio or libcurl as the networking library. This code will run on a Linux operating system, so it needs to be developed with that in mind. Any version of C++ can be used, though C++11 is preferred. Specific code requirements: - Use either boost::asio or libcurl - Save downloaded files with random names to a specific directory - Clean, efficient code - Linux-compatible code
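The requested flow (fetch many URLs concurrently, save each under a random name in a target directory) can be sketched quickly in Python's standard library before porting to C++ with boost::asio or libcurl; the thread pool and `uuid`-based filenames here are illustrative choices, not part of the posting:

```python
import os
import uuid
import urllib.request
from concurrent.futures import ThreadPoolExecutor

def download_one(url, dest_dir):
    """Fetch a single URL and save the body under a random name in dest_dir."""
    data = urllib.request.urlopen(url).read()
    path = os.path.join(dest_dir, uuid.uuid4().hex + ".html")
    with open(path, "wb") as f:
        f.write(data)
    return path

def download_all(urls, dest_dir, workers=8):
    """Download many URLs concurrently using a thread pool."""
    os.makedirs(dest_dir, exist_ok=True)
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(lambda u: download_one(u, dest_dir), urls))
```

A C++11 version would replace the thread pool with `curl_multi_*` handles or overlapping `boost::asio` async operations, but the structure (N in-flight transfers, each writing to a uniquely named file) is the same.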
  • Hourly – Less than 1 month – 30+ hrs/week – Posted
    - Stage 1 (needs to be done ASAP, in less than a month): use OAuth2 to integrate our current web application, written in ASP.NET MVC 3, with a 3rd-party system to authorize users' access. Send/receive data over HTTPS, either at the front end with jQuery or the back end with C#. Data is in XML and JSON format. Handle async requests/responses. Build for performance (high load). Handle unstable network connections well. Handle session timeouts and re-login to the 3rd-party system smoothly with OAuth ...
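The unstable-network and session-timeout requirements above boil down to one loop: retry transient failures with backoff, and re-authenticate once when the token has expired. A minimal language-neutral sketch (shown in Python; `request_fn` and `refresh_fn` are hypothetical stand-ins for the HTTPS call and the OAuth2 re-login):

```python
import time

def call_with_retry(request_fn, refresh_fn, retries=3, backoff=0.5):
    """Call a remote API; retry transient network errors with exponential
    backoff, and re-login (refresh_fn) when the token is rejected.
    PermissionError stands in for an HTTP 401, ConnectionError for a
    dropped connection."""
    delay = backoff
    last_exc = None
    for attempt in range(retries):
        try:
            return request_fn()
        except PermissionError:        # token expired: re-login, retry now
            refresh_fn()
        except ConnectionError as exc: # transient failure: back off, retry
            last_exc = exc
            time.sleep(delay)
            delay *= 2
    raise last_exc or PermissionError("authentication kept failing")
```

The C# equivalent would wrap `HttpClient` calls the same way, distinguishing 401 responses (refresh the OAuth2 token) from socket-level failures (back off and retry).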
  • Hourly – Less than 1 week – 30+ hrs/week – Posted
    If you're knowledgeable in both MATLAB and Python, then read on! I have a MATLAB script comprising various functions (running locally on my laptop) that I'd like to scale up, move to a remote server and automate. The script communicates via API with a Dropbox storing both zipped HTML files and data in JSON format, and via JDBC with a MySQL database instance (RDS) containing indexes, HTML & domain information, and filename/location information for those JSONs. My idea ...
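One plausible reading of the indexing step in this pipeline: for each zipped batch pulled from Dropbox, record the name and location of every JSON file it contains in the database. A minimal sketch under that assumption, with SQLite substituting for the posting's MySQL (RDS) instance and a made-up `json_index` table:

```python
import json
import sqlite3
import zipfile

def index_zip(zip_path, db):
    """Scan a zip archive and record each contained JSON file's name,
    location (the archive path), and top-level key count in an index
    table. sqlite3 stands in here for the real MySQL/RDS instance."""
    db.execute("CREATE TABLE IF NOT EXISTS json_index "
               "(filename TEXT, location TEXT, keys_count INTEGER)")
    with zipfile.ZipFile(zip_path) as zf:
        for name in zf.namelist():
            if name.endswith(".json"):
                data = json.loads(zf.read(name))
                db.execute("INSERT INTO json_index VALUES (?, ?, ?)",
                           (name, zip_path, len(data)))
    db.commit()
```

In production the same function would take a MySQL connection (e.g. via a DB-API driver) instead of sqlite3; the Dropbox download and the HTML files in the archive are left out of scope here.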