An open-source project is underway to build an Amazon Echo-like service using a Raspberry Pi.

Visit the link below for the full source code and a detailed walkthrough.

https://github.com/amzn/alexa-avs-raspberry-pi

 

Project: Raspberry Pi + Alexa Voice Service

About the Project

This project demonstrates how to access and test the Alexa Voice Service using a Java client (running on a Raspberry Pi) and a Node.js server. You will use the Node.js server to obtain a Login with Amazon authorization code by visiting a website in your computer's (in this case, the Raspberry Pi's) web browser.

This guide provides step-by-step instructions for obtaining the sample code, the dependencies, and the hardware you need to get the reference implementation running on your Pi. For Windows, Mac, or generic Linux instructions, see this guide.


Getting Started

Hardware you need

  1. Raspberry Pi 2 (Model B) - Buy at Amazon. UPDATE: Even though this guide was built using a Raspberry Pi 2, it should work just fine with a Raspberry Pi 3 as well. Pi 1 users - please see this thread for help.
  2. Micro-USB power cable for Raspberry Pi (included with Raspberry Pi)
  3. Micro SD Card - To get started with Raspberry Pi you need an operating system. NOOBS (New Out Of the Box Software) is an easy-to-use operating system install manager for the Raspberry Pi. The simplest way to get NOOBS is to buy an SD card with NOOBS preinstalled - Raspberry Pi 8GB Preloaded (NOOBS) Micro SD Card
  4. An Ethernet cable
  5. USB 2.0 Mini Microphone - Raspberry Pi does not have a built-in microphone; to interact with Alexa you'll need an external one to plug in - Buy at Amazon
  6. External Speaker with 3.5mm audio socket/stereo headset jack - Buy at Amazon
  7. A USB Keyboard & Mouse, and an external HDMI Monitor - we also recommend having these handy in case for some reason you can’t “SSH” into your Raspberry Pi (a minimal SSH example follows this list).
  8. WiFi Wireless Adapter (Optional) Buy at Amazon
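
Once Raspbian is running and SSH is enabled, a typical way to reach the Pi over the wired network looks like the lines below. This is a minimal sketch and not part of the original guide; the hostname/IP is an assumption, so substitute your Pi's actual address.

  # Hedged example - replace the hostname or IP with your Pi's actual address.
  ssh pi@raspberrypi.local      # "pi" is the default Raspbian user
  # or, by IP address:
  ssh pi@192.168.0.10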

Skills you need

  1. Basic programming experience
  2. Familiarity with shell

 


Getting Started with Amazon AWS


Sharing some good resources, found via Facebook, for first-time AWS users. For readers who prefer the command line, a minimal CLI sketch follows the list of links below.

 

[Creating an AWS EC2 Instance]
http://wildpup.cafe24.com/archives/696

[Using an IAM Role with AWS EC2]
http://wildpup.cafe24.com/archives/673

[EC2 Security Groups]
http://wildpup.cafe24.com/archives/720

[AWS RDS and Instance Creation]
http://wildpup.cafe24.com/archives/734

[Using AWS RDS Snapshots]
http://wildpup.cafe24.com/archives/754

[Using the AWS RDS Automated Backup Feature]
http://wildpup.cafe24.com/archives/767

[Creating an AWS RDS Read Replica]
http://wildpup.cafe24.com/archives/775

[Introduction to AWS S3 and Basic Usage]
http://wildpup.cafe24.com/archives/785

[Publishing S3 Buckets and Objects on the Web by Setting Permissions]
http://wildpup.cafe24.com/archives/804

[Using the AWS S3 Versioning Feature]
http://wildpup.cafe24.com/archives/821

[Using AWS S3 with CloudFront]
http://wildpup.cafe24.com/archives/830

[AWS CloudWatch and Creating Alarms]
http://wildpup.cafe24.com/archives/847

[Distributing Requests with AWS ELB]
http://wildpup.cafe24.com/archives/867

[Automatically Scaling EC2 Instances with AWS Auto Scaling]
http://wildpup.cafe24.com/archives/890

[Creating an AWS Access Key and Secret Key and Trying the CLI]
http://wildpup.cafe24.com/archives/929

[A Quick Look at AWS SQS]
http://wildpup.cafe24.com/archives/945

[Provisioning Servers with AWS CloudFormation]
http://wildpup.cafe24.com/archives/971

[Introduction to AWS SES and Basic Usage]
http://wildpup.cafe24.com/archives/1003

[Introduction to AWS ElastiCache and Basic Usage (Memcached)]
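
As promised above, here is a minimal sketch (not taken from the linked posts) of configuring the AWS CLI and launching an EC2 instance. The AMI ID, key pair, and security group names are placeholders to replace with your own values.

  # Hedged sketch - the AMI ID, key pair, and security group below are placeholders.
  aws configure                          # enter access key, secret key, default region
  aws ec2 run-instances \
      --image-id ami-xxxxxxxx \
      --instance-type t2.micro \
      --key-name my-key-pair \
      --security-groups my-security-group \
      --count 1
  aws ec2 describe-instances             # confirm the instance is starting up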

Cloning/Migrating an AWS EC2 Instance to Another Region

Migration process overview (a minimal AWS CLI sketch follows the steps below)


  1. Source Region
    1. Clone the running EC2 instance.
    2. Once the cloned instance is running, stop it.
    3. Create a snapshot of that instance's volume.
    4. In the snapshot's Copy menu, select the destination region.

  2. Destination Region
    1. Create a volume from the snapshot copied over from the source.
    2. Create an instance with the same specs as the source EC2 instance, then stop it.
    3. Detach its volume and attach the volume created in step 1.
    4. Start the instance.
    5. Once the instance is running, connect via its Public DNS address and verify.
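
The same workflow can be driven from the AWS CLI. The lines below are a minimal sketch, not taken from the post; the region names and the snapshot, volume, and instance IDs are placeholders you would replace with your own values.

  # Hedged sketch - region names and all resource IDs below are placeholders.
  # Source region: snapshot the stopped clone's root volume, then copy it to the destination region.
  aws ec2 create-snapshot --region ap-northeast-1 --volume-id vol-11111111 --description "migration source"
  aws ec2 copy-snapshot --region us-east-1 --source-region ap-northeast-1 --source-snapshot-id snap-22222222

  # Destination region: create a volume from the copied snapshot, swap it onto a stopped
  # instance of the same specs, then start it and verify over its Public DNS address.
  aws ec2 create-volume --region us-east-1 --snapshot-id snap-33333333 --availability-zone us-east-1a
  aws ec2 detach-volume --region us-east-1 --volume-id vol-44444444
  aws ec2 attach-volume --region us-east-1 --volume-id vol-55555555 --instance-id i-66666666 --device /dev/sda1
  aws ec2 start-instances --region us-east-1 --instance-ids i-66666666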

 

To use AWS for this, create an AWS instance, connect to it over SSH, and follow the script below to install TensorFlow.

[Creating an AWS Instance]

http://yfkwon.tistory.com/5


[Connecting via SSH from Windows Using PuTTY]

http://yfkwon.tistory.com/3


[Installing TensorFlow]

http://erikbern.com/2015/11/12/installing-tensorflow-on-aws/

install-tensorflow.sh                       

  # Note – this is not a bash script (some of the steps require reboot)
  # I named it .sh just so Github does correct syntax highlighting.
  #
  # This is also available as an AMI in us-east-1 (virginia): ami-cf5028a5
  #
  # The CUDA part is mostly based on this excellent blog post:
  # http://tleyden.github.io/blog/2014/10/25/cuda-6-dot-5-on-aws-gpu-instance-running-ubuntu-14-dot-04/
   
  # Install various packages
  sudo apt-get update
  sudo apt-get upgrade -y # choose “install package maintainers version”
  sudo apt-get install -y build-essential python-pip python-dev git python-numpy swig python-dev default-jdk zip zlib1g-dev
   
  # Blacklist Noveau which has some kind of conflict with the nvidia driver
  echo -e "blacklist nouveau\nblacklist lbm-nouveau\noptions nouveau modeset=0\nalias nouveau off\nalias lbm-nouveau off\n" | sudo tee /etc/modprobe.d/blacklist-nouveau.conf
  echo options nouveau modeset=0 | sudo tee -a /etc/modprobe.d/nouveau-kms.conf
  sudo update-initramfs -u
  sudo reboot # Reboot (annoying you have to do this in 2015!)
   
  # Some other annoying thing we have to do
  sudo apt-get install -y linux-image-extra-virtual
  sudo reboot # Not sure why this is needed
   
  # Install latest Linux headers
  sudo apt-get install -y linux-source linux-headers-`uname -r`
   
  # Install CUDA 7.0 (note – don't use any other version)
  wget http://developer.download.nvidia.com/compute/cuda/7_0/Prod/local_installers/cuda_7.0.28_linux.run
  chmod +x cuda_7.0.28_linux.run
  ./cuda_7.0.28_linux.run -extract=`pwd`/nvidia_installers
  cd nvidia_installers
  sudo ./NVIDIA-Linux-x86_64-346.46.run
  sudo modprobe nvidia
  sudo ./cuda-linux64-rel-7.0.28-19326674.run
  cd
   
  # Install CUDNN 6.5 (note – don't use any other version)
  # YOU NEED TO SCP THIS ONE FROM SOMEWHERE ELSE – it's not available online.
  # You need to register and get approved to get a download link. Very annoying.
  tar -xzf cudnn-6.5-linux-x64-v2.tgz
  sudo cp cudnn-6.5-linux-x64-v2/libcudnn* /usr/local/cuda/lib64
  sudo cp cudnn-6.5-linux-x64-v2/cudnn.h /usr/local/cuda/include/
   
  # At this point the root mount is getting a bit full
  # I had a lot of issues where the disk would fill up and then Bazel would end up in this weird state complaining about random things
  # Make sure you don't run out of disk space when building Tensorflow!
  sudo mkdir /mnt/tmp
  sudo chmod 777 /mnt/tmp
  sudo rm -rf /tmp
  sudo ln -s /mnt/tmp /tmp
  # Note that /mnt is not saved when building an AMI, so don't put anything crucial on it
   
  # Install Bazel
  cd /mnt/tmp
  git clone https://github.com/bazelbuild/bazel.git
  cd bazel
  git checkout tags/0.1.0
  ./compile.sh
  sudo cp output/bazel /usr/bin
   
  # Install TensorFlow
  cd /mnt/tmp
  export LD_LIBRARY_PATH="$LD_LIBRARY_PATH:/usr/local/cuda/lib64"
  export CUDA_HOME=/usr/local/cuda
  git clone --recurse-submodules https://github.com/tensorflow/tensorflow
  cd tensorflow
  # Patch to support older K520 devices on AWS
  # wget "https://gist.githubusercontent.com/infojunkie/cb6d1a4e8bf674c6e38e/raw/5e01e5b2b1f7afd3def83810f8373fbcf6e47e02/cuda_30.patch"
  # git apply cuda_30.patch
  # According to https://github.com/tensorflow/tensorflow/issues/25#issuecomment-156234658 this patch is no longer needed
  # Instead, you need to run ./configure like below (not tested yet)
  TF_UNOFFICIAL_SETTING=1 ./configure
  bazel build -c opt --config=cuda //tensorflow/cc:tutorials_example_trainer
   
  # Build Python package
  # Note: you have to specify --config=cuda here - this is not mentioned in the official docs
  # https://github.com/tensorflow/tensorflow/issues/25#issuecomment-156173717
  bazel build -c opt --config=cuda //tensorflow/tools/pip_package:build_pip_package
  bazel-bin/tensorflow/tools/pip_package/build_pip_package /tmp/tensorflow_pkg
  sudo pip install /tmp/tensorflow_pkg/tensorflow-0.5.0-cp27-none-linux_x86_64.whl
   
  # Test it!
  cd tensorflow/models/image/cifar10/
  python cifar10_multi_gpu_train.py
   
  # On a g2.2xlarge: step 100, loss = 4.50 (325.2 examples/sec; 0.394 sec/batch)
  # On a g2.8xlarge: step 100, loss = 4.49 (337.9 examples/sec; 0.379 sec/batch)
  # doesn't seem like it is able to use the 4 GPU cards unfortunately :(
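
As an extra sanity check (not part of the original script), you can confirm the NVIDIA driver and the freshly built wheel are usable before starting longer training runs.

  # Optional sanity check - verify the driver and the installed TensorFlow wheel.
  nvidia-smi                                                    # should list the GRID K520 GPU(s) on g2 instances
  python -c "import tensorflow as tf; print(tf.__version__)"    # expect 0.5.0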