Course Code: cogito
Duration: 21 hours
Prerequisites:

Familiarity with AI concepts and natural language processing

Overview:

Effective data and information management is a foundation of competitive advantage for enterprises in every industry. An information-infused world has added another essential differentiator: cognitive technology. Cogito combines semantic technology with machine learning to deliver an efficient solution.

After completing this course, delegates will:

  • understand Cogito's structure and deployment mechanisms
  • be able to carry out installation, production-environment, and architecture tasks, including configuration
  • be able to assess quality and perform debugging and monitoring
  • be able to implement advanced production tasks such as training models, embedding terms, building graphs, and logging
Course Outline:

Getting Started

  • Setup and Installation

Cogito Basics

  • Creating, Initializing, Saving, and Restoring (pattern sketched below)
  • Feeding, Reading, and Preloading Cogito Data
  • Using Cogito Infrastructure to Work at Scale
  • Visualizing and Evaluating
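
The create/save/restore workflow is specific to Cogito's own APIs, which are not reproduced here. As a minimal, library-agnostic sketch of the same pattern, assuming a model is simply a bag of parameters (all names are hypothetical, not Cogito calls):

    import pickle

    # Hypothetical stand-in for a model: a plain dict of parameters.
    def create_model():
        return {"weights": [0.0, 0.0, 0.0], "bias": 0.0}

    def save_model(model, path):
        # Persist parameters so a later session can restore them.
        with open(path, "wb") as f:
            pickle.dump(model, f)

    def restore_model(path):
        with open(path, "rb") as f:
            return pickle.load(f)

    model = create_model()                  # creation and initialization
    save_model(model, "model.pkl")          # saving
    restored = restore_model("model.pkl")   # restoring
    print(restored)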

Cogito Mechanics 101

  • Prepare the Data
    • Download
    • Inputs and Placeholders
  • Build the Graph
    • Inference
    • Loss
    • Training
  • Train the Model (see the sketch after this outline)
    • The Graph
    • The Session
    • Train Loop
  • Evaluate the Model
    • Build the Eval Graph
    • Eval Output
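
As a library-agnostic illustration of the inference / loss / train-loop / evaluation cycle this module walks through, here is plain-Python gradient descent on a toy linear model (the data and parameter names are illustrative only):

    # Toy dataset: y = 2x + 1, noise-free.
    data = [(x, 2 * x + 1) for x in range(10)]

    w, b = 0.0, 0.0   # model parameters
    lr = 0.005        # learning rate

    def inference(x):
        # Forward pass: predict y from x.
        return w * x + b

    def loss(pred, target):
        # Squared error for one example.
        return (pred - target) ** 2

    # Train loop: nudge w and b against the loss gradient.
    for epoch in range(500):
        for x, y in data:
            grad = 2 * (inference(x) - y)   # d(loss)/d(pred)
            w -= lr * grad * x              # d(pred)/dw = x
            b -= lr * grad                  # d(pred)/db = 1

    # Evaluation: mean squared error over the dataset.
    mse = sum(loss(inference(x), y) for x, y in data) / len(data)
    print(f"w={w:.3f}, b={b:.3f}, mse={mse:.6f}")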

Advanced Usage

  • Threading and Queues (producer/consumer sketch below)
  • Distributed Cogito
  • Writing Documentation and Sharing your Model
  • Customizing Data Readers
  • Using GPUs
  • Manipulating Cogito Files
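
Threading and queues here refer to decoupling data loading from processing. A minimal sketch of that producer/consumer pattern using only the Python standard library (record names are illustrative):

    import queue
    import threading

    q = queue.Queue(maxsize=8)   # bounded queue applies backpressure
    SENTINEL = None              # marks the end of the stream

    def producer():
        for i in range(20):
            q.put(f"record-{i}")  # blocks while the queue is full
        q.put(SENTINEL)

    t = threading.Thread(target=producer)
    t.start()

    while True:
        item = q.get()
        if item is SENTINEL:
            break
        print("processing", item)
    t.join()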

Cogito Serving

  • Introduction
  • Basic Serving Tutorial (see the sketch after this list)
  • Advanced Serving Tutorial
  • Serving Inception Model Tutorial
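
Cogito Serving's own interfaces are covered in the tutorials above; as a generic sketch of the request/response serving pattern using only the Python standard library (the predict function and port are placeholders, not Cogito Serving APIs):

    import json
    from http.server import BaseHTTPRequestHandler, HTTPServer

    def predict(text):
        # Placeholder for a real model call.
        return {"tokens": len(text.split())}

    class Handler(BaseHTTPRequestHandler):
        def do_POST(self):
            length = int(self.headers.get("Content-Length", 0))
            body = json.loads(self.rfile.read(length) or b"{}")
            payload = json.dumps(predict(body.get("text", ""))).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(payload)

    HTTPServer(("localhost", 8080), Handler).serve_forever()

A client would then POST JSON such as {"text": "hello world"} to http://localhost:8080 and receive {"tokens": 2}.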

Getting Started with Syntax Parsing

  • Parsing from Standard Input (sketched below)
  • Annotating a Corpus
  • Configuring the Scripts
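
Parsing from standard input means the pipeline reads raw sentences on stdin and writes one annotated token per line. A minimal sketch of that I/O contract (the tagging and parsing columns would be filled in by the real tools):

    import sys

    # One sentence per input line; emit "index<TAB>form" per token,
    # with a blank line separating sentences (CoNLL-style layout).
    for line in sys.stdin:
        for i, tok in enumerate(line.split(), start=1):
            print(f"{i}\t{tok}")
        print()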

Building an NLP Pipeline Including Cogito

  • Obtaining Data
  • Part-of-Speech Tagging
  • Training the Tagger
  • Preprocessing with the Tagger
  • Dependency Parsing: Transition-Based Parsing (see the sketch after this list)
  • Training a Parser Step 1: Local Pretraining
  • Training a Parser Step 2: Global Training
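
Transition-based parsing builds a dependency tree through stack actions rather than scoring whole trees. A minimal arc-standard sketch, with a hardcoded action sequence standing in for the trained classifier that would normally choose each transition:

    # SHIFT moves the next word onto the stack; LEFT/RIGHT attach one
    # of the top two stack items to the other as a dependent.
    sentence = ["ROOT", "She", "ate", "fish"]
    stack, buffer, arcs = [], list(range(len(sentence))), []

    def shift():
        stack.append(buffer.pop(0))

    def left_arc():                    # second-from-top becomes dependent of top
        dep = stack.pop(-2)
        arcs.append((stack[-1], dep))

    def right_arc():                   # top becomes dependent of second-from-top
        dep = stack.pop()
        arcs.append((stack[-1], dep))

    for action in [shift, shift, shift, left_arc, shift, right_arc, right_arc]:
        action()

    for head, dep in arcs:
        print(f"{sentence[dep]} <- {sentence[head]}")   # dependent <- head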

Representations of Words

  • Motivation: Why Learn Word Embeddings?
  • Scaling up with Noise-Contrastive Training
  • The Skip-gram Model (pair generation sketched below)
  • Syntax-Based Modelling and Representation
  • Building the Graph
  • Training the Model
  • Visualizing the Learned Embeddings
  • Evaluating Embeddings: Analogical Reasoning
  • Optimizing the Implementation
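
As a concrete illustration of what the skip-gram model trains on, here is a sketch that generates (center, context) pairs from a toy corpus (the window size and corpus are arbitrary choices):

    # Each word predicts the words within `window` positions of it.
    corpus = "the quick brown fox jumps over the lazy dog".split()
    window = 2

    pairs = []
    for i, center in enumerate(corpus):
        lo, hi = max(0, i - window), min(len(corpus), i + window + 1)
        for j in range(lo, hi):
            if j != i:
                pairs.append((center, corpus[j]))

    for center, context in pairs[:6]:
        print(center, "->", context)

These pairs are what the training step fits embeddings to, typically with a sampled objective such as noise-contrastive estimation rather than a full softmax.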