Are you a developer? Sign up for workshops to learn how to help build smarter cities in Nepal

24 Apr 2018 - codefornepal

A view from Nagarkot, Nepal

The Asia-Pacific Peace and Development Service Alliance (APPDSA), the Global Peace Foundation, and the Global Young Leaders Academy are hosting a forum on “Smart and Green Cities: Equipping Youth Social Enterprise to Achieve Sustainable Development Goals” at Bougainvilla Events, Tripureshwor (near Avenues TV Plaza), on May 14 and 15, 2018.

At the event, in partnership with Code for Nepal, a group of international technical experts from IBM will deliver extensive training, free of cost, to shed light on what ICT means and how it can be used to build smart and green cities. The workshop is supported by Artificial Intelligence for Development (AID) and AI Developers Nepal (AIDevNepal) as knowledge partners. During the workshop, participants will learn how to sharpen their coding skills and will get hands-on technical training from the experts. Participants will work with blended learning formats and broaden their horizons by introducing IT into their current profession or learning environment.

Read more about the workshops below, and sign up for the free workshops at the end of this page. By May 10, we will let you know whether you have been selected to attend these workshops.

Who should attend these workshops: Developers

Featured Training Topics:
– IBM Cloud
– Quantum Computing
– Artificial Intelligence
– Blockchain
– Internet of Things (IoT)
– Watson Visual Recognition

WORKSHOP THEME: DISASTER RECOVERY

Code Pattern 1: Analyze an image and send a status alert

**Brief about the pattern and how it will impact society**

Industrial and high-tech maintenance companies often photograph their sites to check for potential hazards or emergencies and then inform the appropriate person, who can take action to resolve the issue. A leak, fire, or malfunction can spell disaster for a company, resulting in dangerous situations for employees, downtime, public relations setbacks, and financial losses.

These companies have been leaders in using remote devices (phones, mounted cams, drones) to send images of various sites and equipment to be monitored for any malfunctions. But what if you could automatically analyze those images and send an alert about the location or potential emergency situation?

If you’re a developer working for a company that relies on on-site images, you can now build an application that analyzes an image and sends an alert automatically. In this code pattern, you’ll use IBM Cloud Functions to analyze an image and send it to the Watson IoT Platform. The image will be assigned a score, and that score will be evaluated to trigger any necessary alerts to the authorities through the best available communication channel (for example, email, text, or push notifications).

You have the option to develop a standalone application that can be easily updated or modified to work from within a smart device, or run it in a browser on your laptop or phone. In the pattern’s use case, you’ll learn how to send an image for processing that detects a fire. (You can also use this same app for maintenance alerts or other emergency alert detections.) The fire is identified by the Watson Visual Recognition service, and the Node-RED app then notifies the appropriate resources.
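
To make the Visual Recognition step concrete, here is a minimal Python sketch under stated assumptions: the 2018-era watson_developer_cloud SDK, a placeholder API key, and a placeholder image file. Exact parameter names vary across SDK versions, so treat this as a starting point rather than the pattern’s exact code.

```python
# Rough sketch: score a site photo with Watson Visual Recognition and
# print an alert when a fire-related class crosses a threshold.
# Assumes the 2018-era watson_developer_cloud Python SDK; the API key
# and file name are placeholders.
from watson_developer_cloud import VisualRecognitionV3

visual_recognition = VisualRecognitionV3(
    version='2018-03-19',
    iam_api_key='YOUR_IAM_API_KEY')  # placeholder credential

with open('site-photo.jpg', 'rb') as image_file:
    result = visual_recognition.classify(
        images_file=image_file,
        threshold='0.6')  # drop low-confidence classes

# The response nests detected classes under images -> classifiers.
for image in result['images']:
    for classifier in image['classifiers']:
        for detected in classifier['classes']:
            if 'fire' in detected['class'].lower():
                print('ALERT: possible fire, score %.2f' % detected['score'])
```

The same classify call can drive the maintenance or other emergency-alert variants; only the class names you check would change.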

There are multiple ways to design this process, and you can modify the pattern to extend it to other real-world use cases, sending alerts to other designated recipients and creating additional designated channels for alert notifications.

You’ll create an app based on the following flow (the Cloud Functions piece is sketched in code after this list):

  • The application takes an image from a device or uploads it from a local image folder to an IBM Cloudant NoSQL database.
  • The Cloudant database, in turn, receives the binary data and triggers an action on IBM Cloud Functions.
  • IBM Cloud Functions Composer performs the Visual Recognition analysis and receives a response in JSON format.
  • The response is sent to the IoT Platform, where the sender registers itself as a device and delivers the analyzed image’s results.
  • A Node-RED flow continues to read these events from the device on the IoT Platform and triggers alerts based on the image’s features.
More details on this Code Pattern: https://developer.ibm.com/code/patterns/analyze-an-image-and-send-a-status-alert/
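
The Cloud Functions step in the middle of that flow is simply a function that receives the trigger’s parameters and returns JSON. The sketch below assumes a Python action; fetch_image and score_image are hypothetical helpers standing in for the Cloudant read and the Visual Recognition call sketched above.

```python
# Rough sketch of a Python action for IBM Cloud Functions. The platform
# invokes main(params) and treats the returned dict as the JSON result
# passed to the next step in the composition. fetch_image and
# score_image are hypothetical helpers standing in for the Cloudant
# attachment read and the Visual Recognition call sketched earlier.

def main(params):
    # The Cloudant change trigger supplies the changed document's id.
    doc_id = params.get('id')

    image_bytes = fetch_image(doc_id)    # hypothetical: read the attachment
    analysis = score_image(image_bytes)  # hypothetical: classify the image

    # This JSON response is what gets forwarded to the IoT Platform step.
    return {'docId': doc_id, 'analysis': analysis}
```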

Expected Outcome: What developers will learn

After completing the workshop on this Code Pattern, developers will learn how to (steps 1 and 3 are sketched in code after this list):

1. Upload an image to an IBM Cloudant NoSQL database from a local device image folder
2. Trigger an action on IBM Cloud Functions (serverless)
3. Register a device and send data to the Watson IoT Platform
4. Analyze the image using Watson Visual Recognition
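
As a rough illustration of steps 1 and 3, the sketch below uses the python-cloudant and ibmiotf client libraries with placeholder credentials, and assumes the target database already exists; it is not the pattern’s exact code.

```python
# Rough sketch of steps 1 and 3: upload a local image to Cloudant as a
# document attachment, then publish the analysis as a device event to
# the Watson IoT Platform. Uses the python-cloudant and ibmiotf client
# libraries; all credentials, names, and the score value are placeholders.
from cloudant.client import Cloudant
import ibmiotf.device

# Step 1: upload an image from a local folder to Cloudant.
client = Cloudant('USERNAME', 'PASSWORD',
                  url='https://USERNAME.cloudant.com', connect=True)
db = client['site-images']  # assumes this database already exists

doc = db.create_document({'type': 'site-image', 'site': 'plant-7'})
with open('site-photo.jpg', 'rb') as image_file:
    doc.put_attachment('site-photo.jpg', 'image/jpeg', image_file.read())

# Step 3: register as a device and send the analysis to the IoT Platform.
options = {
    'org': 'YOUR_ORG_ID', 'type': 'camera', 'id': 'cam-001',
    'auth-method': 'token', 'auth-token': 'YOUR_DEVICE_TOKEN',
}
device = ibmiotf.device.Client(options)
device.connect()
device.publishEvent('imageAnalyzed', 'json',
                    {'docId': doc['_id'], 'fireScore': 0.87})
device.disconnect()
client.disconnect()
```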

Pre-reqs for developers:

Code Pattern 2: Deploy a Core ML model with Watson Visual Recognition

**Brief about the pattern and how it will impact society**

Imagine that you’re a technician for an aircraft company and you want to identify one of the thousands of parts in front of you. Perhaps you don’t even have internet connectivity. (Most of the time during disaster recovery, we won’t have access to internet connectivity.) So how do you do it? Where do you start? If only there was an app for that. Well, now you can build one!

Most visual recognition offerings rely on API calls made to a server over HTTP. With Core ML, you can deploy a trained model with your app. Using Watson Visual Recognition, you can train a model without writing any code: simply upload your images with the Watson Studio tool and deploy a trained Core ML model to your iOS application.

In this code pattern, you’ll train a custom model. With just a few clicks, you can test and export that model to be used in your iOS application. The pattern includes an example dataset to help you build an application that can detect different types of cables (that is, HDMI and USB), but you can also use your own data.
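
To give a feel for the training step, here is a rough Python sketch following the 2018-era watson_developer_cloud convention, where each class’s zip of example images is passed as a <class>_positive_examples argument. The zip files and API key are placeholders, and keyword names vary across SDK versions.

```python
# Rough sketch: train a custom Visual Recognition classifier on the
# cables dataset (HDMI vs. USB). Follows the 2018-era
# watson_developer_cloud convention of passing one zip of example
# images per class as <class>_positive_examples; the zip files and
# API key are placeholders.
from watson_developer_cloud import VisualRecognitionV3

visual_recognition = VisualRecognitionV3(
    version='2018-03-19',
    iam_api_key='YOUR_IAM_API_KEY')

with open('hdmi.zip', 'rb') as hdmi_zip, open('usb.zip', 'rb') as usb_zip:
    classifier = visual_recognition.create_classifier(
        'cables',
        hdmi_positive_examples=hdmi_zip,
        usb_positive_examples=usb_zip)

# Training runs asynchronously; poll the status until it is 'ready'.
print(classifier['classifier_id'], classifier['status'])
```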

More details on this Code Pattern: https://developer.ibm.com/code/patterns/deploy-a-core-ml-model-with-watson-visual-recognition/

Expected Outcome: What developers will learn

When you have completed this code pattern, you should know how to (the model download step is sketched after this list):

  • Create a dataset with Watson Studio
  • Train a Watson Visual Recognition classifier based on the dataset
  • Deploy the classifier as a Core ML model to an iOS application
  • Use the Watson Swift SDK to download, manage, and execute the trained model
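
The pattern itself handles the on-device side with the Watson Swift SDK, but purely to illustrate the download step, the same era’s Python SDK exposed a get_core_ml_model call that returns the trained .mlmodel file. The classifier id below is a placeholder, and the return type may vary by SDK version.

```python
# Rough sketch of the model download step from Python (the pattern
# itself does this on-device with the Watson Swift SDK). Assumes the
# 2018-era watson_developer_cloud SDK, where get_core_ml_model returns
# an HTTP response whose body is the .mlmodel file; the classifier id
# and API key are placeholders.
from watson_developer_cloud import VisualRecognitionV3

visual_recognition = VisualRecognitionV3(
    version='2018-03-19',
    iam_api_key='YOUR_IAM_API_KEY')

response = visual_recognition.get_core_ml_model(
    classifier_id='cables_123456789')  # placeholder id

with open('cables.mlmodel', 'wb') as model_file:
    model_file.write(response.content)  # bundle this file into the iOS app
```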
Pre-reqs for developers:


The workshop is part of the Asia-Pacific Peace and Development Service Alliance (APPDSA) two-day forum organized by Global Peace Foundation Nepal. Please find more information about the whole event here.