Cloud

 

Introducing Transfer Appliance in the EU for cloud data migration

You can request a Transfer Appliance directly from your GCP console. The service will be available in beta in the EU in a 100TB configuration with total usable capacity of 200TB. And it’ll soon be available in a 480TB configuration with a total usable capacity of a petabyte.

Moving HDFS clusters with Transfer Appliance  

Customers have been using Transfer Appliance to move everything from audio and satellite imagery archives to geographic and wind data. One popular use case is migrating Hadoop Distributed File System (HDFS) clusters to GCP.

We see lots of users run their powerful Apache Spark and Apache Hadoop clusters on GCP with Cloud Dataproc, a managed Spark and Hadoop service that allows you to create clusters quickly, then hand off cluster management to the service. Transfer Appliance is an easy way to migrate petabytes of data from on-premise HDFS clusters to GCP.

Earlier this year, we announced the ability to configure Transfer Appliance with one or more NFS volumes. This lets you push HDFS data to Transfer Appliance using Apache DistCp (also known as Distributed Copy)—an open source tool commonly used for intra/inter-cluster data copy. To copy HDFS data onto a Transfer Appliance, configure it with an NFS volume and mount it from the HDFS cluster. Then run DistCp with the mount point as the copy target. Once your data is copied to Transfer Appliance, ship it to us and we’ll load your data into Cloud Storage.
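As a rough sketch of that flow (the appliance IP, NFS share, mount point, and HDFS path below are placeholders, and the NFS mount must be reachable from the nodes that run the copy tasks):

# Mount the Transfer Appliance NFS share from the HDFS cluster.
sudo mkdir -p /mnt/transfer-appliance
sudo mount -t nfs <appliance-ip>:/<nfs-share> /mnt/transfer-appliance

# DistCp runs as a MapReduce job; a file:// URI makes the NFS mount the copy target.
hadoop distcp hdfs:///data/warehouse file:///mnt/transfer-appliance/warehouse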

If you are interested in migrating your HDFS data to GCP, request an appliance here. Learn more about the benefits of moving HDFS to GCP here.

Using Transfer Appliance in production

EU customers such as Candour Creative, which helps their clients tell stories through films and photographs, wanted to take advantage of having their content readily available in the cloud. But Zac Crawley, Director at Candour, was facing some challenges with the move.

“Multiple physical backups of our data were taking up space and becoming costly,” Crawley says. “But when we looked at our network, we figured it would take a matter of months to move the 40TBs of large file data. Transfer Appliance reduced that time significantly.”

https://cloud.google.com/blog/products/storage-data-transfer/introducing-transfer-appliance-in-the-eu-for-cloud-data-migration/

Posted in Google Cloud | Tagged , | Comments Off on Introducing Transfer Appliance in the EU for cloud data migration

Choosing your cloud app migration order


Bringing enterprise network security controls to your Kubernetes clusters on GKE

At Google Cloud, we work hard to give you the controls you need to tailor your network and security configurations to your organization’s needs. Today, we’re excited to announce the general availability of a few important networking features for Google Kubernetes Engine (GKE) that provide additional security and privacy for your container infrastructure: private clusters, master authorized networks, and Shared Virtual Private Cloud (VPC).

These new features enable you to limit access to your Kubernetes clusters from the public internet, confining them within the secure perimeter of your VPC, and to share common resources across your organization without compromising on isolation. Specifically:

  • Private clusters let you deploy GKE clusters privately as part of your VPC, restricting access to within the secure boundaries of your VPC.

  • Master authorized networks block access to your clusters’ master API endpoint from the internet, limiting access to a set of IP addresses you control.

  • Shared VPC eases cluster maintenance and operation by separating responsibilities: it gives centralized control of critical network resources to network or security admins, and cluster responsibilities to project admins.

Credit Karma, a personal finance company that keeps track of its users’ credit scores, has been eagerly testing out these advanced GKE networking capabilities, especially as they work to meet compliance requirements such as PCI-DSS (Payment Card Industry Data Security Standard).

“GKE gives us the features we need to move faster. The private cluster capability enables us to meet strict security and compliance requirements without compromising on functionality. With private IPs and pod IP aliasing, we are able to communicate with other services in GCP while staying within Google’s private network.” – Kevin Jones, Staff engineer, Credit Karma

Now that we’ve been introduced to the new features, let’s take a look at each one in more detail.

Enable more secure Kubernetes deployments

Private clusters on GKE use only private IP addresses for your workloads, so they’re reachable only from within your VPC, and communication between the master and the nodes stays completely private.

To access your GKE master for administrative purposes, you can connect to the Kubernetes master privately from your on-premises environment via VPN or Interconnect.

You can also whitelist a set of public internet IPs that are allowed to access the master endpoint, blocking traffic from unauthorized IP sources, with master authorized networks.

Access to images in Google Container Registry, and to Stackdriver for sending logs, also happens privately through Private Google Access, without leaving Google’s network. To give the nodes in a private cluster access to the internet, you can either set up additional services, such as a NAT gateway, or use Google’s managed version, Cloud NAT.
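As a minimal sketch, the commands below create a private, VPC-native cluster whose master endpoint only accepts a whitelisted corporate range, and then add a Cloud NAT gateway for outbound access. The cluster name, CIDR ranges, network, and region are placeholders, and exact flag names can vary across gcloud releases:

# Private cluster with master authorized networks.
gcloud container clusters create private-cluster \
    --enable-private-nodes \
    --enable-ip-alias \
    --master-ipv4-cidr=172.16.0.0/28 \
    --enable-master-authorized-networks \
    --master-authorized-networks=203.0.113.0/24

# Optional: outbound internet access for the private nodes through Cloud NAT.
gcloud compute routers create nat-router --network=default --region=us-central1
gcloud compute routers nats create nat-config \
    --router=nat-router --region=us-central1 \
    --auto-allocate-nat-external-ips --nat-all-subnet-ip-ranges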

Check out the documentation on how to create a private cluster to confine your workloads within the secure boundaries of your VPC.

Control access to critical network resources with Shared VPC

Shared VPC allows many different GKE cluster admins in an Organization to carry out their cluster management duties autonomously while communicating and sharing common resources securely.

For example, you can assign administrative responsibilities such as creating and managing a GKE cluster to project admins, while tasking security and network admin teams with the responsibility for critical network resources like subnets, routes, and firewalls. Learn how to create Kubernetes clusters in a Shared VPC model and set appropriate access controls for critical network resources.
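A minimal sketch of that separation of duties, assuming the shared network, subnet, and its secondary ranges already exist in the host project (all project IDs and resource names below are placeholders):

# Network or security admins: enable Shared VPC on the host project and attach the service project.
gcloud compute shared-vpc enable host-project-id
gcloud compute shared-vpc associated-projects add service-project-id \
    --host-project=host-project-id

# Project admins: create the GKE cluster in the service project on a host-project subnet.
gcloud container clusters create shared-vpc-cluster \
    --project=service-project-id \
    --zone=us-central1-a \
    --enable-ip-alias \
    --network=projects/host-project-id/global/networks/shared-net \
    --subnetwork=projects/host-project-id/regions/us-central1/subnetworks/tier-1 \
    --cluster-secondary-range-name=pods \
    --services-secondary-range-name=services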

In conclusion, GKE provides centralized network and security management for your enterprise deployments, and allows your sensitive workloads to remain secure and private within the boundaries of your VPC. Read more about how to think holistically about the networking you apply to your GKE deployments.

https://cloud.google.com/blog/products/networking/bringing-enterprise-network-security-controls-to-your-kubernetes-clusters-on-gke/

Posted in Google Cloud | Tagged , | Comments Off on Bringing enterprise network security controls to your Kubernetes clusters on GKE

Simplifying ML predictions with Google Cloud Functions

To show off the power of Cloud ML Engine we built two versions of the model independently—one in Scikit-learn and one in TensorFlow—and built a web app to easily generate predictions from both versions. Because these models were built with entirely different frameworks and have different dependencies, it previously required a lot of code to build even a simple app that queried both models. Cloud ML Engine provides a centralized place for us to host multiple types of models, and streamlines the process of querying them.

And before we get into the details, you may be wondering why you’d need multiple versions of the same model. If you’ve got data scientists or ML engineers on your team, they may want to experiment independently with different model inputs and frameworks. Or, maybe they’ve built an initial prototype of a model and will then obtain additional training data and train a new version. A web app like the one we’ve built provides an easy way to compare output, or even load test across multiple versions.

For the frontend, we needed a way to make predictions directly from our web app. Because we wanted the demo to focus on Cloud ML Engine serving, and not on boilerplate details like authenticating our Cloud ML Engine API request, Cloud Functions was a great fit. The frontend consists of a single HTML page hosted on Cloud Storage. When a user enters a movie description in the web app and clicks “Get Prediction,” it invokes a cloud function using an HTTP trigger. The function sends the text to ML Engine, and parses the genres returned from the model to display them in the web UI.
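To get a feel for the request the function sends, you can also query the hosted models directly from the command line. The model name, version names, and input format below are hypothetical and depend on how each model version was exported:

# A single JSON instance matching the model's expected input.
echo '{"description": "A detective chases a con artist across Europe."}' > instance.json

# Ask each hosted version for a prediction and compare the genres they return.
gcloud ml-engine predict --model=movie_genres --version=v_sklearn --json-instances=instance.json
gcloud ml-engine predict --model=movie_genres --version=v_tf --json-instances=instance.json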

Here’s an architecture diagram of how it all fits together:

https://cloud.google.com/blog/products/ai-machine-learning/simplifying-ml-predictions-with-google-cloud-functions/

Posted in Google Cloud | Tagged , | Comments Off on Simplifying ML predictions with Google Cloud Functions

Cloud TPUs in Kubernetes Engine powering Minigo are now available in beta

For a more detailed explanation of Cloud TPUs in GKE, for example how to train the TensorFlow ResNet-50 model using Cloud TPU and GKE, check out the documentation.

640 Cloud TPUs in GKE powering Minigo

Internally, we use Cloud TPUs to run one of the most iconic Google machine learning workloads: Go. Specifically, we run Minigo, an open-source and independent implementation of Google DeepMind’s AlphaGo Zero algorithm, which was the first computer program to defeat a professional human Go player and world champion. Minigo was started by Googlers as a 20% project, written only from existing published papers after DeepMind retired AlphaGo.

Go is a strategy board game that was invented in China more than 2,500 years ago and that has fascinated humans ever since—and in recent years challenged computers. Players alternate placing stones on a grid of lines in an attempt to surround the most territory. The large number of choices available for each move and the very long horizon of their effects combine to make Go very difficult to analyze. Unlike chess or shogi, which have clear rules that determine when a game is finished (e.g., checkmate), a Go game is only over when both players agree. That’s a difficult problem for computers. It’s also very hard, even for skilled human players, to determine which player is winning or losing at a given point in the game.

Minigo plays a game of Go using a neural network, or a model, that answers two questions: “Which move is most likely to be played next?” called the policy, and “Which player is likely to win?” called the value. It uses the policy and value to search through the possible future states of the game and determine the best move to be played.

The neural network provides these answers using reinforcement learning, which iteratively improves the model in a two-step process. First, the best network plays games against itself, recording the results of its search at each move. Second, the network is updated to better predict the results from step one. Then the updated model plays more games against itself, and the cycle repeats, with the self-play process producing new data for the training process to build better models, and so on ad infinitum.

https://cloud.google.com/blog/products/ai-machine-learning/cloud-tpus-in-kubernetes-engine-powering-minigo-are-now-available-in-beta/

Posted in Google Cloud | Tagged , | Comments Off on Cloud TPUs in Kubernetes Engine powering Minigo are now available in beta

Cloudflare DNS and CDN With WordPress High Availability On Google Cloud


Access Google Cloud services, right from IntelliJ IDEA

Posted in Google Cloud | Tagged , | Comments Off on Access Google Cloud services, right from IntelliJ IDEA

Drilling down into Stackdriver Service Monitoring

http://feedproxy.google.com/~r/ClPlBl/~3/WrZQTBTNOWk/drilling-down-into-stackdriver-service-monitoring.html

Posted in Google Cloud | Tagged , | Comments Off on Drilling down into Stackdriver Service Monitoring

Transparent SLIs: See Google Cloud the way your application experiences it

http://feedproxy.google.com/~r/ClPlBl/~3/HATjLhZVggk/transparent-slis-see-google-cloud-the-way-your-application-experiences-it.html

Posted in Google Cloud | Tagged , | Comments Off on Transparent SLIs: See Google Cloud the way your application experiences it

On GCP, your database your way

When choosing a cloud to host your applications, you want a portfolio of database options—SQL, NoSQL, relational, non-relational, scale up/down, scale in/out, you name it—so you can use the right tool for the job. Google Cloud Platform (GCP) offers a full complement of managed database services to address a variety of workload needs, and of course, you can run your own database in Google Compute Engine or Kubernetes Engine if you prefer.

Today, we’re introducing some new database features along with partnerships, beta news and other improvements that can help you get the most out of your databases for your business.

Here’s what we’re announcing today:

  • Oracle workloads can now be brought to GCP
  • SAP HANA workloads can run on GCP persistent-memory VMs
  • Cloud Firestore launching for all users developing cloud-native apps
  • Regional replication, visualization tool available for Cloud Bigtable
  • Cloud Spanner updates, by popular demand

Managing Oracle workloads with Google partners

Until now, it’s been a challenge for customers to bring some of the most common workloads to GCP. Today, we’re excited to announce that we are partnering with managed service providers (MSPs) to provide a fully managed service for Oracle workloads for GCP customers. Partner-managed services like this unlock the ability to run Oracle workloads and take advantage of the rest of the GCP platform. You can run your Oracle workloads on dedicated hardware and connect them to the applications you’re running on GCP.

By partnering with a trusted managed service provider, we can offer fully managed services for Oracle workloads with the same advantages as GCP services. You can select the offering that meets your requirements, as well as use your existing investment in Oracle software licenses.

We are excited to open the doors to customers and partners whose technical requirements do not fit neatly into the public cloud. By working with partners, you’ll have the option to move these workloads to GCP and take advantage of the benefits of not having to manage hardware and software. Learn more about managing your Oracle workloads with Google partners, available this fall.

Partnering with Intel and SAP

This week we announced our collaboration with Intel and SAP to offer Compute Engine virtual machines backed by the upcoming Intel Optane DC Persistent Memory for SAP HANA workloads. Google Compute Engine VMs with this Intel Optane DC persistent memory will offer higher overall memory capacity and lower cost compared to instances with only dynamic random-access memory (DRAM). Google Cloud instances on Intel Optane DC Persistent Memory for SAP HANA and other in-memory database workloads will soon be available through an early access program. To learn more, sign up here.

We’re also continuing to scale our instance size roadmap for SAP HANA production workloads. With 4TB machine types now in general availability, we’re working on new virtual machines that support 12TB of memory by next summer, and 18TB of memory by the end of 2019.

Accelerate app development with Cloud Firestore

For app developers, Cloud Firestore brings the ability to easily store and sync app data at global scale. Today, we’re announcing that we’ll soon expand the availability of the Cloud Firestore beta to more users by bringing the UI to the GCP console. Cloud Firestore is a serverless, NoSQL document database that simplifies storing, syncing and querying data for your cloud-native apps at global scale. Its client libraries provide live synchronization and offline support, while its security features and integrations with Firebase and GCP accelerate building truly serverless apps.

We’re also announcing that Cloud Firestore will support Datastore Mode in the coming weeks. Cloud Firestore, currently available in beta, is the next generation of Cloud Datastore, and offers compatibility with the Datastore API and existing client libraries. With the newly introduced Datastore mode on Cloud Firestore, you don’t need to make any changes to your existing Datastore apps to take advantage of the added benefits of Cloud Firestore. After general availability of Cloud Firestore, we will transparently live-migrate your apps to the Cloud Firestore backend, and you’ll see better performance right away, for the same pricing you have now, with the added benefit of always being strongly consistent. It’ll be a simple, no-downtime upgrade. Read more here about Cloud Firestore.

Simplicity, speed and replication with Cloud Bigtable

For your analytical and operational workloads, an excellent option is Google Cloud Bigtable, a high-throughput, low-latency, and massively scalable NoSQL database. Today, we are announcing that regional replication is generally available. You can easily replicate your Cloud Bigtable data set asynchronously across zones within a GCP region, for additional read throughput, higher durability and resilience in the face of zonal failures. Get more information about regional replication for Cloud Bigtable.
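For example, adding a replicated cluster to an existing instance is a single command; the instance ID, cluster ID, zone, and node count below are placeholders:

# Add a second cluster in another zone of the same region; Cloud Bigtable then
# replicates writes between the clusters asynchronously.
gcloud bigtable clusters create my-instance-c2 \
    --instance=my-instance \
    --zone=us-east1-d \
    --num-nodes=3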

Additionally, we are announcing the beta version of Key Visualizer, a visualization tool for Cloud Bigtable key access patterns. Key Visualizer helps debug performance issues due to unbalanced access patterns across the key space, or single rows that are too large or receiving too much read or write activity. With Key Visualizer, you get a heat map visualization of access patterns over time, along with the ability to zoom into specific key or time ranges, or select a specific row to find the full row key ID that’s responsible for a hotspot. Key Visualizer is automatically enabled for Cloud Bigtable clusters with sufficient data or activity, and does not affect Cloud Bigtable cluster performance. Learn more about using Key Visualizer on our website.

Key Visualizer, now in beta, shows an access pattern heat map so you can debug performance issues in Cloud Bigtable.

Finally, we launched client libraries for Node.js (beta) and C# (beta) this month. We will continue working to provide stronger language support for Cloud Bigtable, and look forward to launching Python (beta), C++ (beta), native Java (beta), Ruby (alpha) and PHP (alpha) client libraries in the coming months. Learn more about Cloud Bigtable client libraries.

Cloud Spanner updates, by popular request

Last year, we launched our Cloud Spanner database, and we’ve already seen customers do proof-of-concept trials and deploy business-critical apps to take advantage of Cloud Spanner’s benefits, which include simplified database administration and management, strong global consistency, and industry-leading SLAs.

Today we’re announcing a number of new updates to Cloud Spanner that our customers have requested. First, we recently announced the general availability of import/export functionality. With this new feature, you can move your data using Apache Avro files, which are transferred with our recently released Apache Beam-based Cloud Dataflow connector. This feature makes Cloud Spanner easier to use for a number of important use cases such as disaster recovery, analytics ingestion, testing and more.

We are also previewing data manipulation language (DML) for Cloud Spanner to make it easier to reuse existing code and tool chains. In addition, you’ll see introspection improvements with Top-N Query Statistics support to help database admins tune performance. DML (in the API as well as in the JDBC driver), and Top-N Query Stats will be released for Cloud Spanner later this year.

Your cloud data is essential to whatever type of app you’re building with GCP. You’ve now got more options than ever when picking the database to power your business.

http://feedproxy.google.com/~r/ClPlBl/~3/2b3Lx6_XeXc/on-gcp-your-database-your-way.html

Posted in Google Cloud | Tagged , | Comments Off on On GCP, your database your way

Announcing resource-based pricing for Google Compute Engine

http://feedproxy.google.com/~r/ClPlBl/~3/ysTBUNeKjzw/announcing-resource-based-pricing-for-google-compute-engine.html

Posted in Google Cloud | Tagged , | Comments Off on Announcing resource-based pricing for Google Compute Engine

Cloud Services Platform: bringing the best of the cloud to you

Posted in Google Cloud | Tagged , | Comments Off on Cloud Services Platform: bringing the best of the cloud to you

Amazon SageMaker Adds Batch Transform Feature and Pipe Input Mode for TensorFlow Containers

At the New York Summit a few days ago we launched two new Amazon SageMaker features: a new batch inference feature called Batch Transform that allows customers to make predictions in non-real time scenarios across petabytes of data and Pipe Input Mode support for TensorFlow containers. SageMaker remains one of my favorite services and we’ve covered it extensively on this blog and the machine learning blog. In fact, the rapid pace of innovation from the SageMaker team is a bit hard to keep up with. Since our last post on SageMaker’s Automatic Model Tuning with Hyper Parameter Optimization, the team launched 4 new built-in algorithms and tons of new features. Let’s take a look at the new Batch Transform feature.

Batch Transform

The Batch Transform feature is a high-performance and high-throughput method for transforming data and generating inferences. It’s ideal for scenarios where you’re dealing with large batches of data, don’t need sub-second latency, or need to both preprocess and transform the training data. The best part? You don’t have to write a single additional line of code to make use of this feature. You can take all of your existing models and start batch transform jobs based on them. This feature is available at no additional charge and you pay only for the underlying resources.

Let’s take a look at how we would do this for the built-in Object Detection algorithm. I followed the example notebook to train my object detection model. Now I’ll go to the SageMaker console and open the Batch Transform sub-console.

From there I can start a new batch transform job.

Here I can name my transform job, select which of my models I want to use, and the number and type of instances to use. Additionally, I can configure the specifics around how many records to send to my inference concurrently and the size of the payload. If I don’t manually specify these then SageMaker will select some sensible defaults.

Next I need to specify my input location. I can either use a manifest file or just load all the files in an S3 location. Since I’m dealing with images here I’ve manually specified my input content-type.

Finally, I’ll configure my output location and start the job!

Once the job is running, I can open the job detail page and follow the links to the metrics and the logs in Amazon CloudWatch.

I can see the job is running and if I look at my results in S3 I can see the predicted labels for each image.

The transform generated one output JSON file per input file containing the detected objects.

From here it would be easy to create a table for the bucket in AWS Glue and either query the results with Amazon Athena or visualize them with Amazon QuickSight.

Of course it’s also possible to start these jobs programmatically from the SageMaker API.
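As a sketch, roughly the same job can be started from the AWS CLI; the job name, model name, bucket paths, and instance settings below are placeholders:

# Start a batch transform job against an existing SageMaker model.
aws sagemaker create-transform-job \
    --transform-job-name object-detection-batch-1 \
    --model-name <your-model-name> \
    --transform-input '{"DataSource": {"S3DataSource": {"S3DataType": "S3Prefix", "S3Uri": "s3://<your-bucket>/images/"}}, "ContentType": "image/jpeg"}' \
    --transform-output S3OutputPath=s3://<your-bucket>/predictions/ \
    --transform-resources InstanceType=ml.p3.2xlarge,InstanceCount=1

# Check on the job as it runs.
aws sagemaker describe-transform-job --transform-job-name object-detection-batch-1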

You can find a lot more detail on how to use batch transforms in your own containers in the documentation.

Pipe Input Mode for TensorFlow

Pipe input mode allows customers to stream their training dataset directly from Amazon Simple Storage Service (S3) into Amazon SageMaker using a highly optimized multi-threaded background process. This mode offers significantly better read throughput than the File input mode that must first download the data to the local Amazon Elastic Block Store (EBS) volume. This means your training jobs start sooner, finish faster, and use less disk space, lowering the costs associated with training your models. It has the added benefit of letting you train on datasets beyond the 16 TB EBS volume size limit.

Earlier this year, we ran some experiments with Pipe Input Mode and found that startup times were reduced up to 87% on a 78 GB dataset, with throughput twice as fast in some benchmarks, ultimately resulting in up to a 35% reduction in total training time.

By adding support for Pipe Input Mode to TensorFlow we’re making it easier for customers to take advantage of the same increased speed available to the built-in algorithms. Let’s look at how this works in practice.

First, I need to make sure I have the sagemaker-tensorflow-extensions available for my training job. This gives us the new PipeModeDataset class which takes a channel and a record format as inputs and returns a TensorFlow dataset. We can use this in our input_fn for the TensorFlow estimator and read from the channel. The code sample below shows a simple example.

import tensorflow as tf
from sagemaker_tensorflow import PipeModeDataset


def input_fn(channel):
    # Simple example data - a labeled vector.
    features = {
        'data': tf.FixedLenFeature([], tf.string),
        'labels': tf.FixedLenFeature([], tf.int64),
    }

    # A function to parse record bytes to a labeled vector record
    def parse(record):
        parsed = tf.parse_single_example(record, features)
        return ({
            'data': tf.decode_raw(parsed['data'], tf.float64)
        }, parsed['labels'])

    # Construct a PipeModeDataset reading from a 'training' channel, using
    # the TF Record encoding.
    ds = PipeModeDataset(channel=channel, record_format='TFRecord')

    # The PipeModeDataset is a TensorFlow Dataset and provides standard Dataset methods.
    ds = ds.repeat(20)
    ds = ds.prefetch(10)
    ds = ds.map(parse, num_parallel_calls=10)
    ds = ds.batch(64)

    return ds

Then you can define your model the same way you would for a normal TensorFlow estimator. When it comes to estimator creation time, you just need to pass in input_mode='Pipe' as one of the parameters.
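If you launch the training job through the AWS CLI rather than the SageMaker Python SDK, the equivalent switch is the TrainingInputMode setting in the algorithm specification; the container image URI, role ARN, and bucket paths below are placeholders:

aws sagemaker create-training-job \
    --training-job-name tf-pipe-mode-demo \
    --algorithm-specification TrainingImage=<your-tensorflow-image-uri>,TrainingInputMode=Pipe \
    --role-arn <your-sagemaker-execution-role-arn> \
    --input-data-config '[{"ChannelName": "training", "DataSource": {"S3DataSource": {"S3DataType": "S3Prefix", "S3Uri": "s3://<your-bucket>/train/", "S3DataDistributionType": "FullyReplicated"}}}]' \
    --output-data-config S3OutputPath=s3://<your-bucket>/output/ \
    --resource-config InstanceType=ml.p3.2xlarge,InstanceCount=1,VolumeSizeInGB=50 \
    --stopping-condition MaxRuntimeInSeconds=3600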

Available Now

Both of these new features are available now at no additional charge, and I’m looking forward to seeing what customers can build with the batch transform feature. I can already tell you that it will help us with some of our internal ML workloads here in AWS Marketing.

As always, let us know what you think of this feature in the comments or on Twitter!

Randall

Posted in AWS | Tagged | Comments Off on Amazon SageMaker Adds Batch Transform Feature and Pipe Input Mode for TensorFlow Containers

Now shipping: ultramem machine types with up to 4TB of RAM

Today we are announcing the general availability of Google Compute Engine “ultramem” memory-optimized machine types. You can provision ultramem VMs with up to 160 vCPUs and nearly 4TB of memory–the most vCPUs you can provision on-demand in any public cloud. These ultramem machine types are great for running memory-intensive production workloads such as SAP HANA, while leveraging the performance and flexibility of Google Cloud Platform (GCP).

The ultramem machine types offer the most resources per VM of any Compute Engine machine type, while supporting Compute Engine’s innovative differentiators.

SAP-certified for OLAP and OLTP workloads

Since we announced our partnership with SAP in early 2017, we’ve rapidly expanded our support for SAP HANA with new memory-intensive Compute Engine machine types. We’ve also worked closely with SAP to test and certify these machine types to bring you validated solutions for your mission-critical workloads. Our supported VM sizes for SAP HANA now meet the broad range of Google Cloud Platform’s customers’ demands. Over the last year, the size of our certified instances grew by more than 10X for both scale-up and scale-out deployments. With up to 4TB of memory and 160 vCPUs, ultramem machine types are the largest SAP-certified instances on GCP for your OLAP and OLTP workloads.

Maximum memory per node and per cluster for SAP HANA on GCP, over time

We also offer other capabilities to manage your HANA environment on GCP including automated deployments, and Stackdriver monitoring. Click here for a closer look at the SAP HANA ecosystem on GCP.

Up to 70% discount for committed use

We are also excited to share that GCP now offers deeper committed use discounts of up to 70% for memory-optimized machine types, helping you improve your total cost of ownership (TCO) for sustained, predictable usage. This allows you to control costs through a variety of usage models: on-demand usage to start testing machine types, committed use discounts when you are ready for production deployments, and sustained use discounts for mature, predictable usage. For more details on committed use discounts for these machine types check our docs, or use the pricing calculator to assess your savings on GCP.

GCP customers have been doing exciting things with ultramem VMs

GCP customers have been using ultramem VMs for a variety of memory-intensive workloads including in-memory databases, HPC applications, and analytical workloads.

Colgate has been collaborating with SAP and Google Cloud as an early user of ultramem VMs for S/4 HANA.

“As part of our partnership with SAP and Google Cloud, we have been an early tester of Google Cloud’s 4TB instances for SAP solution workloads. The machines have performed well, and the results have been positive. We are excited to continue our collaboration with SAP and Google Cloud to jointly create market changing innovations based upon SAP Cloud Platform running on GCP.”
– Javier Llinas, IT Director, Colgate

Getting started

These ultramem machine types are available in us-central1, us-east1, and europe-west1, with more global regions planned soon. Stay up-to-date on additional regions by visiting our available regions and zones page.

It’s easy to configure and provision n1-ultramem machine types programmatically, as well as via the console. To learn more about running your SAP HANA in-memory database on GCP with ultramem machine types, visit our SAP page, and go to the GCP Console to get started.
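For example, provisioning one of these machines from the command line is a one-liner; the instance name and zone below are placeholders:

# Create a 160-vCPU, ~3.8TB-memory n1-ultramem VM in a supported region.
gcloud compute instances create my-ultramem-vm \
    --machine-type=n1-ultramem-160 \
    --zone=us-central1-a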

Posted in Google Cloud | Tagged , | Comments Off on Now shipping: ultramem machine types with up to 4TB of RAM

New Server Status

Server Name          Server IP / Host       Port   Status    ID
louisville           34.202.219.14          80     ONLINE    1
chicago              35.203.105.32          80     ONLINE    2
matt                 35.206.124.97          80     ONLINE    3
heritage             35.233.246.74          53     ONLINE    4
cody                 35.231.128.130         80     ONLINE    5
ns2.matt2.info       ns2.matt2.info         53     ONLINE
ns1.matt2.info       ns1.matt2.info         53     ONLINE    7
ns1.data502.com      ns1.data502.com        53     ONLINE    8
ns2.data502.com      ns2.data502.com        53     ONLINE    9
ns9.data502.com      ns9.data502.com        53     ONLINE    10
ns3.data502.com      ns3.data502.com        53     OFFLINE   11
ns3.matt2.info       ns3.matt2.info         53     OFFLINE   12
mail.data502.com     mail.data502.com       110    ONLINE    13
FTP Louisville       34.202.219.14          21     ONLINE    14
mysql.data502.com    data502.com            3306   ONLINE    15
SSH LOUISVILLE       34.202.219.14          22     ONLINE    16
imap.data502.com     imap.data502.com       220    OFFLINE   17
cpanel.data502.com   cpanel.data502.com     2083   ONLINE    18
SMTP Chicago         34.202.219.14          25     ONLINE    19
FTP Chicago          35.203.105.32          21     ONLINE    20
SSH Chicago          35.203.105.32          22     ONLINE    21
Posted in AWS, Cloud, Google Cloud, GPC, Hosting, News | Leave a comment

AWS re:Invent 2018 is Coming – Are You Ready?

As I write this, there are just 138 days until re:Invent 2018. My colleagues on the events team are going all-out to make sure that you, our customer, will have the best possible experience in Las Vegas. After meeting with them, I decided to write this post so that you can have a better understanding of what we have in store, know what to expect, and have time to plan and to prepare.

Dealing with Scale
We started out by talking about some of the challenges that come with scale. Approximately 43,000 people (AWS customers, partners, members of the press, industry analysts, and AWS employees) attended in 2017 and we are expecting an even larger crowd this year. We are applying many of the scaling principles and best practices that apply to cloud architectures to the physical, logistical, and communication challenges that are part-and-parcel of an event that is this large and complex.

We want to make it easier for you to move from place to place, while also reducing the need for you to do so! Here’s what we are doing:

Campus Shuttle – In 2017, hundreds of buses traveled on routes that took them to a series of re:Invent venues. This added a lot of latency to the system and we were not happy about that. In 2018, we are expanding the fleet and replacing the multi-stop routes with a larger set of point-to-point connections, along with additional pick-up and drop-off points at each venue. You will be one hop away from wherever you need to go.

Ride Sharing – We are partnering with Lyft and Uber (both powered by AWS) to give you another transportation option (download the apps now to be prepared). We are partnering with the Las Vegas Monorail and the taxi companies, and are also working on a teleportation service, but do not expect it to be ready in time.

Session Access – We are setting up a robust overflow system that spans multiple re:Invent venues, and are also making sure that the most popular sessions are repeated in more than one venue.

Improved Mobile App – The re:Invent mobile app will be more lively and location-aware. It will help you to find sessions with open seats, tell you what is happening around you, and keep you informed of shuttle and other transportation options.

Something for Everyone
We want to make sure that re:Invent is a warm and welcoming place for every attendee, with business and social events that we hope are progressive and inclusive. Here’s just some of what we have in store:

You can also take advantage of our mother’s rooms, gender-neutral restrooms, and reflection rooms. Check out the community page to learn more!

Getting Ready
Now it is your turn! Here are some suggestions to help you to prepare for re:Invent:

  • Register – Registration is now open! Every year I get email from people I have not talked to in years, begging me for last-minute access after re:Invent sells out. While it is always good to hear from them, I cannot always help, even if we were in first grade together.
  • Watch – We’re producing a series of How to re:Invent webinars to help you get the most from re:Invent. Watch What’s New and Breakout Content Secret Sauce ASAP, and stay tuned for more.
  • Plan – The session catalog is now live! View the session catalog to see the initial list of technical sessions. Decide on the topics of interest to you and to your colleagues, and choose your breakout sessions, taking care to pay attention to the locations. There will be over 2,000 sessions so choose with care and make this a team effort.
  • Pay Attention – We are putting a lot of effort into preparatory content – this blog post, the webinars, and more. Watch, listen, and learn!
  • Train – Get to work on your cardio! You can easily walk 10 or more miles per day, so bring good shoes and arrive in peak condition.

Partners and Sponsors
Participating sponsors are a core part of the learning, networking, and after hours activities at re:Invent.

For APN Partners, re:Invent is the single largest opportunity to interact with AWS customers, delivering both business development and product differentiation. If you are interested in becoming a re:Invent sponsor, read the re:Invent Sponsorship Prospectus.

For re:Invent attendees, I urge you to take time to meet with Sponsoring APN Partners in both the Venetian and Aria Expo halls. Sponsors offer diverse skills, Competencies, services and expertise to help attendees solve a variety of different business challenges. Check out the list of re:Invent Sponsors to learn more.

See You There
Once you are on site, be sure to take advantage of all that re:Invent has to offer.

If you are not sure where to go or what to do next, we’ll have some specially trained content experts to guide you.

I am counting down the days, gearing up to crank out a ton of blog posts for re:Invent, and looking forward to saying hello to friends new and old.

Jeff;

PS – We will be adding new sessions to the session catalog over the summer, so be sure to check back every week!

 

Posted in AWS, News | Tagged | Comments Off on AWS re:Invent 2018 is Coming – Are You Ready?

DeepLens Challenge #1 Starts Today – Use Machine Learning to Drive Inclusion

Are you ready to develop and show off your machine learning skills in a way that has a positive impact on the world? If so, get your hands on an AWS DeepLens video camera and join the AWS DeepLens Challenge!

About the Challenge
Working together with our friends at Intel, we are launching the first in a series of eight themed challenges today, all centered around improving the world in some way. Each challenge will run for two weeks and is designed to help you to get some hands-on experience with machine learning.

We will announce a fresh challenge every two weeks on the AWS Machine Learning Blog. Each challenge will have a real-world theme, a technical focus, a sample project, and a subject matter expert. You have 12 days to invent and implement a DeepLens project that resonates with the theme, and to submit a short, compelling video (four minutes or less) to represent and summarize your work.

We’re looking for cool submissions that resonate with the theme and that make great use of DeepLens. We will watch all of the videos and then share the most intriguing ones.

Challenge #1 – Inclusivity Challenge
The first challenge was inspired by the Special Olympics, which took place in Seattle last week. We invite you to use your DeepLens to create a project that drives inclusion, overcomes barriers, and strengthens the bonds between people of all abilities. You could gauge the physical accessibility of buildings, provide audio guidance using Polly for people with impaired sight, or create educational projects for children with learning disabilities. Any project that supports this theme is welcome.

For each project that meets the entry criteria we will make a donation of $249 (the retail price of an AWS DeepLens) to the Northwest Center, a non-profit organization based in Seattle. This organization works to advance equal opportunities for children and adults of all abilities and we are happy to be able to help them to further their mission. Your work will directly benefit this very worthwhile goal!

As an example of what we are looking for, ASLens is a project created by Chris Coombs of Melbourne, Australia. It recognizes and understands American Sign Language (ASL) and plays the audio for each letter. Chris used Amazon SageMaker and Polly to implement ASLens (you can watch the video, learn more and read the code).

To learn more, visit the DeepLens Challenge page. Entries for the first challenge are due by midnight (PT) on July 22nd and I can’t wait to see what you come up with!

Jeff;

PS – The DeepLens Resources page is your gateway to tutorial videos, documentation, blog posts, and other helpful information.

Posted in AWS, News | Tagged | Comments Off on DeepLens Challenge #1 Starts Today – Use Machine Learning to Drive Inclusion

7 best practices for building containers

Kubernetes Engine is a great place to run your workloads at scale. But before being able to use Kubernetes, you need to containerize your applications. You can run most applications in a Docker container without too much hassle. However, effectively running those containers in production and streamlining the build process is another story. There are a number of things to watch out for that will make your security and operations teams happier. This post provides tips and best practices to help you effectively build containers.

1. Package a single application per container

Get more details

A container works best when a single application runs inside it. This application should have a single parent process. For example, do not run PHP and MySQL in the same container: it’s harder to debug, Linux signals will not be properly handled, you can’t horizontally scale the PHP containers, etc. This allows you to tie the lifecycle of the application to that of the container.

The container on the left follows the best practice. The container on the right does not.

2. Properly handle PID 1, signal handling, and zombie processes

Get more details

Kubernetes and Docker send Linux signals to your application inside the container to stop it. They send those signals to the process with the process identifier (PID) 1. If you want your application to stop gracefully when needed, you need to properly handle those signals.

Google Developer Advocate Sandeep Dinesh’s article —Kubernetes best practices: terminating with grace— explains the whole Kubernetes termination lifecycle.
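As an illustration, here is a minimal entrypoint sketch that runs as PID 1, traps SIGTERM, and forwards it to the application so it can shut down gracefully (the application path is a placeholder):

#!/bin/sh
# Forward termination signals from PID 1 to the application process.
_term() {
  kill -TERM "$child" 2>/dev/null
  wait "$child"
}
trap _term TERM INT

/usr/local/bin/my-app &   # placeholder for your application binary
child=$!
wait "$child"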

3. Optimize for the Docker build cache

Get more details

Docker can cache layers of your images to accelerate later builds. This is a very useful feature, but it introduces some behaviors that you need to take into account when writing your Dockerfiles. For example, you should add the source code of your application as late as possible in your Dockerfile so that the base image and your application’s dependencies get cached and aren’t rebuilt on every build.

Take this Dockerfile as an example:

FROM python:3.5
COPY my_code/ /src
RUN pip install my_requirements

You should swap the last two lines:

FROM python:3.5
RUN pip install my_requirements
COPY my_code/ /src

In the new version, the result of the pip command will be cached and will not be rerun each time the source code changes.

4. Remove unnecessary tools

Get more details

Reducing the attack surface of your host system is always a good idea, and it’s much easier to do with containers than with traditional systems. Remove everything that the application doesn’t need from your container. Or better yet, include just your application in a distroless or scratch image. You should also, if possible, make the filesystem of the container read-only. This should get you some excellent feedback from your security team during your performance review.

5. Build the smallest image possible

Get more details

Who likes to download hundreds of megabytes of useless data? Aim to have the smallest images possible. This decreases download times, cold start times, and disk usage. You can use several strategies to achieve that: start with a minimal base image, leverage common layers between images and make use of Docker’s multi-stage build feature.

The Docker multi-stage build process.

Google Developer Advocate Sandeep Dinesh’s article —Kubernetes best practices: How and why to build small container images— covers this topic in depth.

6. Properly tag your images

Get more details

Tags are how the users choose which version of your image they want to use. There are two main ways to tag your images: Semantic Versioning, or using the Git commit hash of your application. Whichever you choose, document it and clearly set the expectations that the users of the image should have. Be careful: while users expect some tags, like the “latest” tag, to move from one image to another, they expect other tags to be immutable, even if they are not technically so. For example, once you have tagged a specific version of your image with something like “1.2.3”, you should never move this tag.
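For instance, you can apply both schemes to the same build; the registry path and version number below are placeholders:

# Tag one build with an immutable semantic version and with the Git commit hash.
IMAGE=gcr.io/my-project/my-app
GIT_SHA=$(git rev-parse --short HEAD)

docker build -t "$IMAGE:1.2.3" -t "$IMAGE:$GIT_SHA" -t "$IMAGE:latest" .
docker push "$IMAGE:1.2.3"
docker push "$IMAGE:$GIT_SHA"
docker push "$IMAGE:latest"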

7. Carefully consider whether to use a public image

Get more details

Using public images can be a great way to start working with a particular piece of software. However, using them in production can come with a set of challenges, especially in a high-constraint environment. You might need to control what’s inside them, or you might not want to depend on an external repository, for example. On the other hand, building your own images for every piece of software you use is not trivial, particularly because you need to keep up with the security updates of the upstream software. Carefully weigh the pros and cons of each for your particular use-case, and make a conscious decision.

Next steps

You can read more about those best practices on Best Practices for Building Containers, and learn more about our Kubernetes Best Practices. You can also try out our Quickstarts for Kubernetes Engine and Container Builder.

Posted in Google Cloud | Tagged , | Comments Off on 7 best practices for building containers

Predict your future costs with Google Cloud Billing cost forecast

With every new feature we introduce to Google Cloud Billing, we strive to provide your business with greater flexibility, control, and clarity so that you can better align your strategic priorities with your cloud usage. In order to do so, it’s important to be able to answer key questions about your cloud costs, such as:

  • “How is my current month’s Google Cloud Platform (GCP) spending trending?”
  • “How much am I forecasted to spend this month based on historical trends?”
  • “Which GCP product or project is forecasted to cost me the most this month?”

Today, we are excited to announce the availability of a new cost forecast feature for Google Cloud Billing. This feature makes it easier to see at a glance how your costs are trending and how much you are projected to spend. You can now forecast your end-of-month costs for whatever bucket of spend is important to you, from your entire billing account down to a single SKU in a single project.

View your current and forecasted costs

Get started

Cost forecast for Google Cloud Billing is now available to all accounts. Get started by navigating to your account’s billing page in the GCP console and opening the reports tab in the left-hand navigation bar.

You can learn more about the cost forecast feature in the billing reports documentation. Also, if you’re attending Google Cloud Next ‘18, check out our session on Monitoring and Forecasting Your GCP Costs.

Related content

Posted in Google Cloud | Tagged , | Comments Off on Predict your future costs with Google Cloud Billing cost forecast

Introducing Jib — build Java Docker images better

Containers are bringing Java developers closer than ever to a “write once, run anywhere” workflow, but containerizing a Java application is no simple task: You have to write a Dockerfile, run a Docker daemon as root, wait for builds to complete, and finally push the image to a remote registry. Not all Java developers are container experts; what happened to just building a JAR?

To address this challenge, we’re excited to announce Jib, an open-source Java containerizer from Google that lets Java developers build containers using the Java tools they know. Jib is a fast and simple container image builder that handles all the steps of packaging your application into a container image. It does not require you to write a Dockerfile or have docker installed, and it is directly integrated into Maven and Gradle—just add the plugin to your build and you’ll have your Java application containerized in no time.

Docker build flow:

Jib build flow:

How Jib makes development better:

Jib takes advantage of layering in Docker images and integrates with your build system to optimize Java container image builds in the following ways:

  1. Simple – Jib is implemented in Java and runs as part of your Maven or Gradle build. You do not need to maintain a Dockerfile, run a Docker daemon, or even worry about creating a fat JAR with all its dependencies. Since Jib tightly integrates with your Java build, it has access to all the necessary information to package your application. Any variations in your Java build are automatically picked up during subsequent container builds.
  2. Fast – Jib takes advantage of image layering and registry caching to achieve fast, incremental builds. It reads your build config, organizes your application into distinct layers (dependencies, resources, classes) and only rebuilds and pushes the layers that have changed. When iterating quickly on a project, Jib can save valuable time on each build by only pushing your changed layers to the registry instead of your whole application.
  3. Reproducible – Jib supports building container images declaratively from your Maven and Gradle build metadata, and as such can be configured to create reproducible build images as long as your inputs remain the same.

How to use Jib to containerize your application

Jib is available as plugins for Maven and Gradle and requires minimal configuration. Simply add the plugin to your build definition and configure the target image. If you are building to a private registry, make sure to configure Jib with credentials for your registry. The easiest way to do this is to use credential helpers like docker-credential-gcr. Jib also provides additional rules for building an image to a Docker daemon if you need it.
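For example, for Google Container Registry you can install and register the credential helper like this (assuming the Cloud SDK is installed); the helper is written to your Docker config, and Jib can also be configured to use it directly:

gcloud components install docker-credential-gcr
docker-credential-gcr configure-docker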

Jib on Maven

<plugin>
  <groupId>com.google.cloud.tools</groupId>
  <artifactId>jib-maven-plugin</artifactId>
  <version>0.9.0</version>
  <configuration>
    <to>
      <image>gcr.io/my-project/image-built-with-jib</image>
    </to>
  </configuration>
</plugin>

# Builds to a container image registry.
$ mvn compile jib:build

# Builds to a Docker daemon.
$ mvn compile jib:dockerBuild

Jib on Gradle

plugins {
  id 'com.google.cloud.tools.jib' version '0.9.0'
}

jib.to.image = 'gcr.io/my-project/image-built-with-jib'

# Builds to a container image registry.
$ gradle jib

# Builds to a Docker daemon.
$ gradle jibDockerBuild

We want everyone to use Jib to simplify and accelerate their Java development. Jib works with most cloud providers; try it out and let us know what you think at github.com/GoogleContainerTools/jib.

Posted in Google Cloud | Tagged , | Comments Off on Introducing Jib — build Java Docker images better

Fascinating Frank Abagnale: Catch Me If You Can | Talks At Google

[youtube https://www.youtube.com/watch?v=vsMydMDi3rI]

Frank Abagnale, depicted by Leonardo DiCaprio in Catch Me If You Can, speaks at Google about his life and cyber crime.

We’ve all seen the movie, or at least we all should have: the teen who runs away and becomes everything from an airline pilot to an ER doctor to an assistant district attorney, all before he turns 21. He’s a cybercrime expert these days, and his experience with forgery and documents is fascinating because it comes from the other side of the table from most experts.

Posted in Cloud, cloud502, Data, security | Tagged , , | Comments Off on Fascinating Frank Abagnale: Catch Me If You Can | Talks At Google

SEO Tools: Visualize Pages And Their Interrelational Linkage

Anyone who knows me knows I love visualizations. I’ve been working on SEO strategies for a site, and the beginning of any good strategy is understanding the site and the quality of its content. Some studies suggest 60% of people think in pictures; that is to say, if I said the word “cat,” some people will see a cat and some will see the letters c-a-t. It’s hard for visual people to grasp complex concepts such as “millions”; it’s harder to produce that word in a visual way in your mind. Similarly, when we think about a website, it’s often hard to grasp the links and relationships between a site’s pages. Sure, we can create a boring flow chart, but it often has to be oversimplified and lacks a good representation of the site’s content.

internal site links visualization

Recently, while working for https://pearlharbor.org, I found a tool that created some really useful visualizations of the site and its pages, with the linkage between them. I believe the software limits its link mapping to 10,000 links, which is astounding. The site offers information and Pearl Harbor tours while serving as a memorial that includes a page for each of the survivors. My focus with this site is to make SEO recommendations and ensure the effective utilization of social media in improving site ranking.

A website visualization for SEO purposes

The free plug for this software: it’s Website Auditor. You can use the free evaluation version for as long as you’d like; you just can’t save or print anything. It’s still something extraordinary for understanding a site’s layout. In addition, the software includes further tools for link building and so on.

I’m excited about my newly discovered free SEO tools. I’m going to make these images for any site I work on. While I want to improve SEO through more authoritative external links, it’s essential that those links land on a site that uses intelligent internal linkage.

Posted in Cloud, cloud502, Data, Matthew Leffler, SEO, visualization | Tagged , , , , , | Comments Off on SEO Tools: Visualize Pages And Their Interrelational Linkage

I Have To Share The SEO Love In Oahu

pearlharbor.org

I am happy to be working with a respected tour site, one that blends history, patriotism, and tours of Hawaii’s Pearl Harbor. Principally I am concerned with increasing the ever-important backlinks to propel the site to the top of Google’s page rankings.

Backlinks are where unrelated sites link to another site, giving it relevancy in Google’s eyes. They are like votes of confidence. Keywords have lost their stature in SEO because too often sites capitalized on the overuse of a keyword, making the content poor for users and only good for search results.

Miserable Failure

Remember when searching Google for “miserable failure” took you to the White House bio of George Bush? That’s where a backlink campaign associated the term “miserable failure” with the President’s page. Those days are gone, but you can still see the rationale in backlinks defining the anchor text: if hundreds of sites linked to your page and the text of the link said “horrible web design,” Google would wonder why all of these random sites consider yours so bad.

If you plan to visit Hawaii, or know someone who is, do check out the tours and feel free to share the site with friends and family. It includes the standard packages you would expect, but also tours beyond just Oahu. Private tours as well as helicopter tours add greater adventure while still showing respect for the dead still entombed there today. The USS Arizona is both a graveyard and a memorial. You can view individual pages for all of the survivors and the victims; some pages need content, and if you know of anyone concerned with the attack, please pass them the link.

Remember Pearl Harbor

Search queries that bring attention to the site include “how many people died at pearl harbor” or some variation of that. By the way, a total of 2,403 people died.

I’m looking forward to working on this project.

Matthew

Posted in Cloud, cloud502, Data, hawaii, Matthew Leffler, pearl harbor, remember, tours | Tagged , , , , , , , | Comments Off on I Have To Share The SEO Love In Oahu

AWS Heroes – New Categories Launch

As you may know, in 2014 we launched the AWS Community Heroes program to recognize a vibrant group of AWS experts. These standout individuals use their extensive knowledge to teach customers and fellow-techies about AWS products and services across a range of mediums. As AWS grows, new groups of Heroes emerge.

Today, we’re excited to recognize prominent community leaders by expanding the AWS Heroes program. Unlike Community Heroes (who tend to focus on advocating a wide range of AWS services within their community), these new Heroes are specialists who focus their efforts and advocacy on a specific technology. Our first new Heroes are the AWS Serverless Heroes and AWS Container Heroes. Please join us in welcoming them as the passion and enthusiasm for AWS knowledge-sharing continues to grow in technical communities.

AWS Serverless Heroes

Serverless Heroes are early adopters and spirited pioneers of the AWS serverless ecosystem. They evangelize AWS serverless technologies online and in person, and through open source contributions to GitHub and the AWS Serverless Application Repository, these Serverless Heroes help evolve the way developers, companies, and the community at large build modern applications. Our initial cohort of Serverless Heroes includes:

Yan Cui

Aleksandar Simovic

Forrest Brazeal

Marcia Villalba

Erica Windisch

Peter Sbarski

Slobodan Stojanović

Rob Gruhl

Michael Hart

Ben Kehoe

Austen Collins

Announcing AWS Container Heroes

Container Heroes are prominent trendsetters who are deeply connected to the ever-evolving container community. They possess extensive knowledge of multiple Amazon container services, are always keen to learn the latest trends, and are passionate about sharing their insights with anyone running containers on AWS. Please meet the first AWS Container Heroes:

Casey Lee

Tung Nguyen

Philipp Garbe

Yusuke Kuoka

Mike Fiedler

The trends within the AWS community are ever-changing.  We look forward to recognizing a wide variety of Heroes in the future. Stay tuned for additional updates to the Hero program in coming months, and be sure to visit the Heroes website to learn more.

Posted in AWS | Tagged | Comments Off on AWS Heroes – New Categories Launch

How Do You SEO? A Resource For Friends

seo guide

There are a ton of free SEO tools, and even more that aren’t free. Usually the free ones are just teasers for the paid tools. I’ve collected links to the free tools that I have found useful. Beyond the tools that optimize a site’s SEO, it’s important to have at least a foundational understanding of what SEO is. It isn’t what it used to be: sites would simply pick the keywords they wanted to rank for and then repeat that keyword over and over, sometimes at the bottom of a page or behind an image where only a search engine would find them. Keywords allowed gamification of rankings, so Google stepped away from them and focused more on backlinks, links that point back to your site.

Before you jump into these resources, do spend some time reviewing the links below. There are common mistakes, and sites can be penalized for appearing to game the system. The first step to proper SEO is quality content. Fix errors and give people something of value to link to.

To get started with SEO, here are links to a few sites that I’ve found useful.

Posted in Cloud, cloud502, Data, Matthew Leffler | Tagged , , | Comments Off on How Do You SEO? A Resource For Friends

Introducing Endpoint Verification: visibility into the desktops accessing your enterprise applications

Posted in Google Cloud | Tagged , | Comments Off on Introducing Endpoint Verification: visibility into the desktops accessing your enterprise applications

AWS Online Tech Talks – July 2018

Join us this month to learn about AWS services and solutions, featuring topics on Amazon EMR, Amazon SageMaker, AWS Lambda, Amazon S3, Amazon WorkSpaces, Amazon EC2 Fleet and more! We also have our third episode of “How to re:Invent,” where we’ll dive deep with the AWS Training and Certification team on Bootcamps, Hands-on Labs, and how to get AWS Certified at re:Invent. Register now! We look forward to seeing you. Please note: all sessions are free and in Pacific Time.

Tech talks featured this month:

Analytics & Big Data

July 23, 2018 | 11:00 AM – 12:00 PM PT – Large Scale Machine Learning with Spark on EMR – Learn how to do large scale machine learning on Amazon EMR.

July 25, 2018 | 01:00 PM – 02:00 PM PT – Introduction to Amazon QuickSight: Business Analytics for Everyone – Get an introduction to Amazon QuickSight, Amazon’s BI service.

July 26, 2018 | 11:00 AM – 12:00 PM PT – Multi-Tenant Analytics on Amazon EMR – Discover how to make an Amazon EMR cluster multi-tenant to have different processing activities on the same data lake.

Compute

July 31, 2018 | 11:00 AM – 12:00 PM PT – Accelerate Machine Learning Workloads Using Amazon EC2 P3 Instances – Learn how to use Amazon EC2 P3 instances, the most powerful, cost-effective and versatile GPU compute instances available in the cloud.

August 1, 2018 | 09:00 AM – 10:00 AM PT – Technical Deep Dive on Amazon EC2 Fleet – Learn how to launch workloads across instance types, purchase models, and AZs with EC2 Fleet to achieve the desired scale, performance and cost.

Containers

July 25, 2018 | 11:00 AM – 11:45 AM PT – How Harry’s Shaved Off Their Operational Overhead by Moving to AWS Fargate – Learn how Harry’s migrated their messaging workload to Fargate and reduced message processing time by more than 75%.

Databases

July 23, 2018 | 01:00 PM – 01:45 PM PT – Purpose-Built Databases: Choose the Right Tool for Each Job – Learn about purpose-built databases and when to use which database for your application.

July 24, 2018 | 11:00 AM – 11:45 AM PT – Migrating IBM Db2 Databases to AWS – Learn how to migrate your IBM Db2 database to the cloud database of your choice.

DevOps

July 25, 2018 | 09:00 AM – 09:45 AM PT – Optimize Your Jenkins Build Farm – Learn how to optimize your Jenkins build farm using the plug-in for AWS CodeBuild.

Enterprise & Hybrid

July 31, 2018 | 09:00 AM – 09:45 AM PT – Enable Developer Productivity with Amazon WorkSpaces – Learn how your development teams can be more productive with Amazon WorkSpaces.

August 1, 2018 | 11:00 AM – 11:45 AM PT – Enterprise DevOps: Applying ITIL to Rapid Innovation – Innovation doesn’t have to equate to more risk for your organization. Learn how Enterprise DevOps delivers agility while maintaining governance, security and compliance.

IoT

July 30, 2018 | 01:00 PM – 01:45 PM PT – Using AWS IoT & Alexa Skills Kit to Voice-Control Connected Home Devices – Hands-on workshop that covers how to build a simple backend service using AWS IoT to support an Alexa Smart Home skill.

Machine Learning

July 23, 2018 | 09:00 AM – 09:45 AM PT – Leveraging ML Services to Enhance Content Discovery and Recommendations – See how customers are using computer vision and language AI services to enhance content discovery & recommendations.

July 24, 2018 | 09:00 AM – 09:45 AM PT – Hyperparameter Tuning with Amazon SageMaker’s Automatic Model Tuning – Learn how to use Automatic Model Tuning with Amazon SageMaker to tune hyperparameters and get the best machine learning model for your datasets.

July 26, 2018 | 09:00 AM – 10:00 AM PT – Build Intelligent Applications with Machine Learning on AWS – Learn how to accelerate development of AI applications using machine learning on AWS.

re:Invent

July 18, 2018 | 08:00 AM – 08:30 AM PT – Episode 3: Training & Certification Round-Up – Join us as we dive deep with the AWS Training and Certification team on Bootcamps, Hands-on Labs, and how to get AWS Certified at re:Invent.

Security, Identity, & Compliance

July 30, 2018 | 11:00 AM – 11:45 AM PT – Get Started with Well-Architected Security Best Practices – Discover and walk through essential best practices for securing your workloads using a number of AWS services.

Serverless

July 24, 2018 | 01:00 PM – 02:00 PM PT – Getting Started with Serverless Computing Using AWS Lambda – Get an introduction to serverless and how to start building applications with no server management.

Storage

July 30, 2018 | 09:00 AM – 09:45 AM PT – Best Practices for Security in Amazon S3 – Learn about Amazon S3 security fundamentals and lots of new features that help make security simple.

Posted in Analytics, AWS | Tagged | Comments Off on AWS Online Tech Talks – July 2018

AWS Lambda Adds Amazon Simple Queue Service to Supported Event Sources

We can now use Amazon Simple Queue Service (SQS) to trigger AWS Lambda functions! This is a stellar update with some key functionality that I’ve personally been looking forward to for more than 4 years. I know our customers are excited to take it for a spin, so feel free to skip to the walkthrough section below if you don’t want a trip down memory lane.

SQS was the first service we ever launched with AWS back in 2004, 14 years ago. For some perspective, the largest commercial hard drives in 2004 were around 60GB, PHP 5 came out, Facebook had just launched, the TV show Friends ended, Gmail was brand new, and I was still in high school. Looking back, I can see some of the tenets that make AWS what it is today were present even very early on in the development of SQS: fully managed, network accessible, pay-as-you-go, and no minimum commitments. Today, SQS is one of our most popular services, used by hundreds of thousands of customers at absolutely massive scales as one of the fundamental building blocks of many applications.

AWS Lambda, by comparison, is a relative newcomer, having been released at AWS re:Invent in 2014 (I was in the crowd that day!). Lambda is a compute service that lets you run code without provisioning or managing servers, and it launched the serverless revolution back in 2014. It has seen immediate adoption across a wide array of use cases, from web and mobile backends to IT policy engines to data processing pipelines. Today, Lambda supports Node.js, Java, Go, C#, and Python runtimes, letting customers minimize changes to existing codebases and giving them flexibility to build new ones. Over the past 4 years we’ve added a large number of features and event sources for Lambda, making it easier for customers to just get things done. By adding support for SQS to Lambda we’re removing a lot of the undifferentiated heavy lifting of running a polling service or creating an SQS to SNS mapping.

Let’s take a look at how this all works.

Triggering Lambda from SQS

First, I’ll need an existing SQS standard queue, or I’ll need to create one. I’ll go over to the AWS Management Console and open up SQS to create a new queue. Let’s give it a fun name. At the moment, Lambda triggers only work with standard queues, not FIFO queues.

Now that I have a queue I want to create a Lambda function to process it. I can navigate to the Lambda console and create a simple new function that just prints out the message body with some Python code like this:

def lambda_handler(event, context):
    for record in event['Records']:
        print(record['body'])

Next I need to add the trigger to the Lambda function, but before I can do that I need to make sure my AWS Identity and Access Management (IAM) execution role for the function has the correct permissions to talk to SQS. The details of creating that role can be found in our documentation. With the correct permissions in place I can add the SQS trigger by selecting SQS in the triggers section on the left side of the console. I can select which queue I want to use to invoke the Lambda and the maximum number of records a single Lambda will process (up to 10, based on the SQS ReceiveMessage API).
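
If you’d rather script this step than click through the console, the same mapping can be created through the API. Below is a minimal sketch using boto3; the queue ARN and function name are placeholders, and it assumes the execution role already carries the SQS permissions mentioned above (roughly sqs:ReceiveMessage, sqs:DeleteMessage, and sqs:GetQueueAttributes).

import boto3

lambda_client = boto3.client('lambda')

# Placeholder identifiers -- substitute your own queue ARN and function name.
queue_arn = 'arn:aws:sqs:us-east-2:123456789012:my-example-queue'
function_name = 'my-sqs-processor'

# Wire the queue to the function; BatchSize can be 1 to 10 messages per invocation.
mapping = lambda_client.create_event_source_mapping(
    EventSourceArn=queue_arn,
    FunctionName=function_name,
    BatchSize=10,
    Enabled=True,
)
print(mapping['UUID'])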

Lambda will automatically scale out horizontally to consume the messages in my queue. Lambda will try to consume the queue as quickly and efficiently as possible by maximizing concurrency within the bounds of each service. As the queue traffic fluctuates, the Lambda service will scale the polling operations up and down based on the number of inflight messages. I’ve covered this behavior in more detail in the additional information section at the bottom of this post. In order to control the concurrency on the Lambda service side, I can increase or decrease the concurrent execution limit for my function.
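
To make that concrete, here is one way to cap the function’s concurrency with boto3 so the SQS poller can never push the function past a chosen limit. The function name and the value of 50 are placeholders, not recommendations.

import boto3

lambda_client = boto3.client('lambda')

# Reserve (and thereby cap) concurrency for the function.
lambda_client.put_function_concurrency(
    FunctionName='my-sqs-processor',
    ReservedConcurrentExecutions=50,
)

# Removing the limit later returns the function to the account's shared pool.
# lambda_client.delete_function_concurrency(FunctionName='my-sqs-processor')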

For each batch of messages processed, if the function returns successfully then those messages will be removed from the queue. If the function errors out or times out, the messages will return to the queue after the visibility timeout set on the queue. Just as a quick note here, our Lambda function’s timeout has to be lower than the queue’s visibility timeout in order to create the event mapping from SQS to Lambda.
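
To illustrate those batch semantics, here is a sketch of a handler in which any raised exception sends the whole batch back to the queue after the visibility timeout; the validation rule is invented purely for the example.

def lambda_handler(event, context):
    # The batch succeeds or fails as a unit: if this function raises,
    # every message in the batch becomes visible again on the queue
    # once the visibility timeout expires.
    for record in event['Records']:
        body = record['body']
        if not body:  # hypothetical validation rule
            raise ValueError('empty message body, retrying the whole batch')
        print(body)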

After adding the trigger, I can make any other changes I want to the function and save it. If we hop back over to the SQS console we can see the trigger is registered. I can create, configure, and edit the trigger from the SQS console as well.

Now that I have the trigger set up I’ll use the AWS CLI to enqueue a simple message and test the functionality:

aws sqs send-message --queue-url https://sqs.us-east-2.amazonaws.com/123456789/SQS_TO_LAMBDA_IS_FINALLY_HERE_YAY --message-body "hello, world"

My Lambda receives the message and executes the code, printing the message payload into my Amazon CloudWatch logs.

Of course all of this works with AWS SAM out of the box.

AWSTemplateFormatVersion: '2010-09-09'
Transform: AWS::Serverless-2016-10-31
Description: Example of processing messages on an SQS queue with Lambda
Resources:
  MySQSQueueFunction:
    Type: AWS::Serverless::Function
    Properties:
      Runtime: python3.6
      CodeUri: src/
      Events:
        MySQSEvent:
          Type: SQS
          Properties:
            Queue: !GetAtt MySqsQueue.Arn
            BatchSize: 10
  MySqsQueue:
    Type: AWS::SQS::Queue

Additional Information

There are no additional charges for this feature, but because the Lambda service is continuously long-polling the SQS queue the account will be charged for those API calls at the standard SQS pricing rates.

So, a quick deep dive on concurrency and automatic scaling here – just keep in mind that this behavior could change. The automatic scaling behavior of Lambda is designed to keep polling costs low when a queue is empty while simultaneously letting us scale up to high throughput when the queue is being used heavily. When an SQS event source mapping is initially created and enabled, or when messages first appear after a period with no traffic, the Lambda service will begin polling the SQS queue using five parallel long-polling connections. The Lambda service monitors the number of inflight messages, and when it detects that this number is trending up, it will increase the polling frequency by 20 ReceiveMessage requests per minute and the function concurrency by 60 calls per minute. As long as the queue remains busy it will continue to scale until it hits the function concurrency limits. As the number of inflight messages trends down, Lambda will reduce the polling frequency by 10 ReceiveMessage requests per minute and decrease the concurrency used to invoke our function by 30 calls per minute.
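
Purely as a toy illustration of the ramp rates quoted above (and not the service’s actual algorithm), the per-minute adjustments could be sketched like this:

def adjust_polling(polls_per_minute, invocations_per_minute, trending_up, concurrency_limit):
    # Apply the described ramp rates once per minute.
    if trending_up:
        polls_per_minute += 20        # more ReceiveMessage requests per minute
        invocations_per_minute += 60  # more function invocations per minute
    else:
        polls_per_minute = max(0, polls_per_minute - 10)
        invocations_per_minute = max(0, invocations_per_minute - 30)
    # Concurrency never exceeds the function's concurrent execution limit.
    return polls_per_minute, min(invocations_per_minute, concurrency_limit)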

The documentation is up to date with more info than what’s contained in this post. You can find an example SQS event payload there as well. You can find more details from the SQS side in their documentation.

This feature is immediately available in all regions where Lambda is available.

As always, we’re excited to hear feedback about this feature either on Twitter or in the comments below. Finally, I just want to give a quick shout out to the Lambda team members who put a lot of thought into the integration of these two services.

Randall

Posted in AWS | Tagged | Comments Off on AWS Lambda Adds Amazon Simple Queue Service to Supported Event Sources