AWS news


AWS and Mongo and Open Source

Many people seem to think that the AWS and MongoDB story is about cloud providers and Open Source business models — but the core of it is in fact about cloud providers and selling software licenses. Many companies built around Open Source projects make money by also selling licenses to some additional proprietary software, but all companies built around proprietary software sell licenses.

Selling licenses is hard in the presence of cloud providers because cloud providers have a better product: they sell the whole thing, while a license is only a part of the solution. People don't want to buy a quarter-inch drill, they want a quarter-inch hole.

Let me quote Ben Thompson from the linked article:

"There is a secular shift in enterprise computing moving to the cloud, not because it is necessarily cheaper (although costs are more closely aligned to usage), but because performance, scalability, and availability are hard problems that have little to do with the core competency and point of differentiation of most companies."

This is why Microsoft is trying to become a cloud provider instead of a software license seller.

In other words: imagine that MongoDB was entirely proprietary. Would that make MongoDB's business model any better? Would Amazon pay for its license? Economically there would be no difference from the present situation, and Amazon would do the same thing it has done in reality — it would just rewrite the whole thing and not pay the royalties. I suspect that it is not even about the fees — it is about control. Remember embrace and extend?
Hackernoon

AWS: Creating APIs Using Go Part 1

AWS and DynamoDB Setup

Over the next few months, I will be writing about how to use AWS and Go to create web applications. For this first post, I'm going to be writing about how to develop microservices using Go, AWS Lambda functions, the AWS API Gateway, and DynamoDB.

What Are We Going To Build?

We are going to be building a simple contact/address book application. This will allow the end user to create a new contact, view contacts, search for contacts, edit contacts, and delete contacts. But in this first post, we will only be setting up an AWS account and creating our database.

Signing Up For An AWS Free-Tier Account

I'm not going to walk you through how to sign up for AWS because this is a well-documented process. You can use this link to set up your account.

Setup AWS CLI

Before installing the AWS CLI you should create a new account for yourself and download your security credentials. You can find information about how to install the AWS CLI on your platform here.

Create The Contacts Table

We will be using DynamoDB for our database. To create a new table, log in to the AWS console and search for DynamoDB in the list of services. After selecting DynamoDB, click the Create table button. This will bring up a page to create a new table. Enter a table name of contacts, enter the name ID for the partition key, and select String from the dropdown list of data types. After this, click the Create button.

Add Items To The Database

To create a new item, select the Items tab and click the Create item button. For our contact item, we want id, name, phone, email, company, and title values. Enter values for each of these attributes and click the Save button. You should create a few more test records using the steps above.

Conclusion

In this first post, we covered how to set up AWS and how to create a DynamoDB table to hold our contacts. In the upcoming posts, I will cover how to create functions to add, get, and remove data from the database.
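The post sticks to the AWS console, but since the rest of the series will be written in Go, here is a minimal sketch of the same setup done programmatically with the AWS SDK for Go (v1). The table name contacts and the ID partition key come from the post; the billing mode, the sample attribute values and the SDK choice are assumptions, not something the post prescribes.

package main

import (
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/dynamodb"
)

func main() {
	// Reuses the region and credentials configured for the AWS CLI earlier.
	sess := session.Must(session.NewSessionWithOptions(session.Options{
		SharedConfigState: session.SharedConfigEnable,
	}))
	svc := dynamodb.New(sess)

	// Create the "contacts" table with ID (String) as the partition key,
	// mirroring what the post does in the console. PAY_PER_REQUEST is an assumption.
	_, err := svc.CreateTable(&dynamodb.CreateTableInput{
		TableName: aws.String("contacts"),
		AttributeDefinitions: []*dynamodb.AttributeDefinition{
			{AttributeName: aws.String("ID"), AttributeType: aws.String("S")},
		},
		KeySchema: []*dynamodb.KeySchemaElement{
			{AttributeName: aws.String("ID"), KeyType: aws.String("HASH")},
		},
		BillingMode: aws.String("PAY_PER_REQUEST"),
	})
	if err != nil {
		log.Fatalf("create table: %v", err)
	}

	// Wait until the table is ACTIVE before writing to it.
	if err := svc.WaitUntilTableExists(&dynamodb.DescribeTableInput{
		TableName: aws.String("contacts"),
	}); err != nil {
		log.Fatalf("wait for table: %v", err)
	}

	// Add a test contact; the attribute names follow the post, the values are made up.
	_, err = svc.PutItem(&dynamodb.PutItemInput{
		TableName: aws.String("contacts"),
		Item: map[string]*dynamodb.AttributeValue{
			"ID":      {S: aws.String("1")},
			"name":    {S: aws.String("Jane Doe")},
			"phone":   {S: aws.String("555-0100")},
			"email":   {S: aws.String("jane@example.com")},
			"company": {S: aws.String("Example Inc")},
			"title":   {S: aws.String("Engineer")},
		},
	})
	if err != nil {
		log.Fatalf("put item: %v", err)
	}
	fmt.Println("contacts table created and test item added")
}

Either way you end up with the same contacts table; the console route described in the post and this sketch are interchangeable for the rest of the series.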
Originally published at harrisonbrock.com

Hackernoon

Running Laravel Artisan commands on AWS Fargate

I've recently published my thoughts on running a scalable, highly available Laravel project behind AWS VPC. In this post I want to talk about running Laravel Artisan commands on Fargate.

Introduction

AWS Fargate is an AWS managed service that allows us to deploy Docker containers on ECS without having to manage the underlying infrastructure (an EC2 cluster). That means it's no longer possible to use a bastion host to get shell access to one of the container instances and interact with the container via docker exec.

One key concept that allows Artisan to run on Fargate is the Docker command, which is responsible for executing the command that gives the container a reason to exist. For instance, when running Apache on an Alpine image, the command will be httpd -DFOREGROUND. This command holds the container running indefinitely because its implementation is an event loop that is supposed to never finish. If we prepare a Docker image with a command of php artisan my:command, then this command will be executed as soon as the container starts. If the command finishes with a status code of 0, it means it executed successfully and the container is now ready for a graceful shutdown. This is especially good because AWS Fargate charges on a per-minute basis, which means we are able to start a container and pay only for the few minutes while the command is being executed.

Implementation

The first step in implementing a Docker image to run Artisan on Fargate is a Docker multi-stage build. The following Dockerfile is divided into Base, Dependencies, Artisan and App.

+--------------+------------------------------------------------+
| Layer        | Description                                    |
+--------------+------------------------------------------------+
| base         | Operating System dependencies / PHP Extensions |
| dependencies | Application Dependencies / Composer            |
| artisan      | Layer used for running Artisan commands        |
| app          | Web Application                                |
+--------------+------------------------------------------------+

https://medium.com/media/c352626c21c624811b1efa098ac03cbf/href

By using a multi-stage build, we can choose which stages we want to push to AWS Elastic Container Registry (ECR). The following snippet is an example of a buildspec.yaml used by AWS CodeBuild in conjunction with AWS CodePipeline.

https://medium.com/media/bbf9b68cf1df8f47f2ac9b19a442acc8/href

Notice that AWS ECR now has two images, one for the app and another for artisan.

Note: the configuration of AWS ECR, CodeBuild and CodePipeline is beyond the scope of this article. The following steps assume that a Task Definition has been created for the artisan image.

After creating a Task Definition for the Artisan image, it is possible to use the AWS CLI to start a container that runs a registered Artisan command. Make sure to fill in your_cluster_name, your_task_definition_name-artisan, your_aws_profile and the correct path for a network-configuration file. Below you'll find a sample of a network.json.

https://medium.com/media/0a252c42d39e45f7904ec68d14837658/href

https://medium.com/media/a309a97fbdd4e296f6d6b800cd946351/href

Usage

You can now run commands via sh artisan.sh my:command. By issuing this command, the bash script will instruct the AWS CLI to start a new container on Fargate with a Docker command override that runs php /app/artisan $1, where $1 is the first argument, in this case my:command. Amazon will start a container on your ECS cluster with this command and, as soon as the command finishes, the container will exit.
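The post drives this with the AWS CLI wrapped in artisan.sh. For readers who would rather trigger the same thing from code, here is a rough sketch of the equivalent run-task call using the AWS SDK for Go. The cluster, task definition, subnet, security group and container name are placeholders (the post itself only names your_cluster_name, your_task_definition_name-artisan and a network.json file), so treat every concrete value as an assumption.

package main

import (
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/ecs"
)

// runArtisan starts a one-off Fargate task whose command override is
// "php /app/artisan <command>", mirroring what artisan.sh does via the CLI.
func runArtisan(command string) error {
	sess := session.Must(session.NewSession())
	svc := ecs.New(sess)

	out, err := svc.RunTask(&ecs.RunTaskInput{
		Cluster:        aws.String("your_cluster_name"),                 // placeholder from the post
		TaskDefinition: aws.String("your_task_definition_name-artisan"), // placeholder from the post
		LaunchType:     aws.String("FARGATE"),
		NetworkConfiguration: &ecs.NetworkConfiguration{
			// Normally read from network.json; these IDs are assumptions.
			AwsvpcConfiguration: &ecs.AwsVpcConfiguration{
				Subnets:        []*string{aws.String("subnet-xxxxxxxx")},
				SecurityGroups: []*string{aws.String("sg-xxxxxxxx")},
				AssignPublicIp: aws.String("DISABLED"),
			},
		},
		Overrides: &ecs.TaskOverride{
			ContainerOverrides: []*ecs.ContainerOverride{{
				Name:    aws.String("artisan"), // assumed container name in the task definition
				Command: []*string{aws.String("php"), aws.String("/app/artisan"), aws.String(command)},
			}},
		},
	})
	if err != nil {
		return err
	}
	if len(out.Tasks) > 0 {
		fmt.Println("started task:", aws.StringValue(out.Tasks[0].TaskArn))
	}
	return nil
}

func main() {
	if err := runArtisan("my:command"); err != nil {
		log.Fatal(err)
	}
}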
You should be able to redirect any information from stderr and stdout to CloudWatch to be able to figure out any output the command gave you. The command you want to run should already be inside the source code that was pushed to AWS ECR and tagged with the -artisan suffix. This means that as soon as you write your custom command, you'll have to push your code changes first and only then run the bash script.

Conclusion

As mentioned in my previous post, this is particularly important for running long-running processes without being subject to HTTP request limitations. It also enforces a proper deployment process before anyone can run arbitrary commands on your production VPC, which works well if your workplace has a well-established code review process before pushing to production. Not having to write API endpoints to run one-off processes after a release cycle improved the development process greatly for me, especially because I can comfortably work with the Laravel mindset of easy console commands provided by Artisan.

This process allows a team to run php artisan migrate on production without having to open Amazon RDS to the outside of the VPC. However, since migrate is a command that is executed on every release, I plan on writing another post explaining how to prepare a Task Definition dedicated to migration that will start automatically after every deploy.

If you enjoy reading about how I deploy microservices on AWS Fargate behind a VPC, follow me on https://hackernoon.com/@deleugpn and https://twitter.com/@deleugyn.
Hackernoon

Top 5 Amazon Web Services or AWS Courses to Learn Online — FREE and Best of Lot

A list of some free AWS courses to learn Amazon Web Services online at your own pace.

Hello guys, if you want to learn Amazon Web Services, popularly known as AWS, and are looking for some awesome resources, e.g. books, courses, and tutorials, then you have come to the right place. In this article, I am going to share some of the best Amazon Web Services or AWS courses, which will help you learn this revolutionary and valuable technology free of cost.

Unlike other free courses, these are genuine free AWS courses, made free by their authors and instructors for promotional and educational purposes. You just need to enroll in them and then you can learn AWS at any time, at any place and on your own schedule.

But if you are completely new to the AWS domain or the Cloud, let me give you a brief overview of Amazon Web Services and its benefits over a traditional infrastructure setup.

What is Amazon Web Services (AWS)? Benefits

AWS is, at its core, an infrastructure service provided by Amazon. It's a revolutionary change because it allows you to develop an application without worrying about hardware, networking, databases and the other physical infrastructure you need to run your application. For example, if you want to develop an online application for your business, you need a lot of servers, databases, and other infrastructure. You need to rent a data center and buy servers, routers, databases, and other equipment to get started, which is a pain and poses a big hurdle for many entrepreneurs. AWS solves that problem by renting out its infrastructure and servers for a fraction of what you would incur setting it all up yourself.

Amazon has built many data centers around the world to support its core business, i.e. the e-commerce business that powers Amazon.com, and AWS emerged from that. AWS allows Amazon to monetize its massive infrastructure by renting it out to the people and businesses who need it.

It created the phenomenon of Infrastructure as a Service, because now you only pay for the infrastructure you are actually using. For example, if you set up your own data center and buy 10 servers and 10 databases but end up using only 5 of them, the rest are a waste of money and they also cost you in maintenance. With Amazon Web Services, you can quickly get rid of them. Similarly, you can scale pretty quickly if you are hosting your application on the cloud, i.e. on Amazon Web Services. If you see that your traffic is increasing, you can quickly order new servers and, boom, your new infrastructure is ready in hours, unlike the days and months of the traditional approach. You also don't need to hire UNIX admins, database administrators, network admins, storage specialists, etc. All of that is done by Amazon, and because Amazon does it at scale, it can offer the same service at a much lower cost.

In short, Amazon Web Services gave birth to the concept of the Cloud, which allows you to bring your business online without worrying about the hardware and infrastructure that power it.

Top 5 Courses to Learn Amazon Web Services (AWS)

Now that we know what AWS is and what benefits it offers in terms of Infrastructure as a Service, it's time to learn the different Amazon services in depth, and that's where these courses will help you. You can join these courses if you want to learn about AWS and the Cloud in general, or if you are preparing for various AWS certifications like AWS Solutions Architect, AWS SysOps Admin, or AWS Developer (associate).
These courses will help you kick-start your journey into the beautiful world of AWS.

1. Amazon Web Services — Learning and Implementing AWS Solution

This is one of the best courses to learn Amazon Web Services, and it's FREE. It follows the principle of learning by example and real-world scenarios, and that reflects in the course. This is a short course, with just 2 hours' worth of material, but it's power-packed and intense. There is no nonsense talk or flipping; the instructor Dhruv Bias always means business. Even if you only check the preview chapter, you will learn a lot about what AWS is, what problem it solves, what benefits it offers and why you should learn it. The course is divided into 5 sections: in the first section you get an introduction to AWS and an overview of the course, while the remaining sections focus on different Amazon Web Services offerings, e.g. Amazon S3 (Simple Storage Service), Amazon EC2 (Elastic Compute Cloud) and databases like AWS DynamoDB or RDS. Overall a great free course to learn what AWS is and what its different services do. I highly recommend this course to any programmer who wants to learn about the Cloud and Amazon Web Services (AWS).

2. AWS Concepts

This is another awesome free course to learn Amazon Web Services on Udemy. It's from LinuxAcademy and taught by instructor Thomas Haslet. The series is actually divided into 2 courses: AWS Concepts and AWS Essentials. This is the first part, while the next course, which is also free, is the second part of the series. In this course, you will learn the concepts of Cloud Computing and Amazon Web Services from instructor Thomas Haslet, who is also a certified AWS developer. He holds all three associate-level AWS certifications:

- AWS Solutions Architect (associate)
- AWS SysOps Admin (associate)
- AWS Developer (associate)

This course is for the absolute beginner, someone who has never heard about the Cloud or AWS but understands what hardware, servers, and databases are and why you need them. In this course, you will not only learn essential concepts but also build your vocabulary. You will find answers to all of your basic AWS questions, e.g. What is the Cloud? What is AWS? What are the AWS core services? What is the benefit of AWS? Why should you use it? In short, a perfect course if you are new to the cloud. You will learn about VPC, EC2, S3, RDS and other cloud terminology in simple language.

3. AWS Essentials

This is the second part of the free AWS courses by LinuxAcademy on Udemy. If you have not taken the first part, AWS Concepts, then you should finish that first before joining this course, though it's not mandatory. This course goes into a little more detail on the AWS core services than the previous one. It also has a lot of material, with around 50 lectures covering different cloud and AWS concepts. The course is divided into 14 sections, each covering a key AWS concept, e.g. Identity and Access Management (IAM), Virtual Private Cloud (VPC), Simple Storage Service (S3), Elastic Compute Cloud (EC2), databases, Simple Notification Service (SNS), Auto Scaling, Route 53, serverless Lambdas, etc. In short, one of the most comprehensive AWS courses, and it is also free. More than 70 thousand students have already enrolled in this course and are learning AWS, and I also highly recommend this one to anyone interested in the Cloud and AWS.

4. Learn Amazon Web Services (AWS): The Complete Introduction

This is another useful and exciting free AWS course you will love to join on Udemy.
In this course, instructor Mike Chambers, an early adopter of the Cloud and AWS, explains the basics of Amazon Web Services. The course is also very hands-on: you start by signing up to AWS, creating your account and then using the command-line interface to control AWS. You will also learn to navigate around the AWS console, build Windows and Linux servers and create a WordPress website in 5 minutes, which demonstrates how you can leverage the Cloud for your database, server, and storage requirements. The course also teaches you how to build a simple AWS serverless system.

The course not only focuses on AWS technology and terminology but also teaches you the basics, e.g. the true definition of Cloud Computing and how AWS fits into the Cloud model. You will also get a realistic picture of where AWS is physically located around the world. But, most importantly, you will gain some hands-on experience with essential AWS services like:

- AWS S3 — Amazon Simple Storage Service
- Amazon Lambda — Function as a Service
- AWS EC2 — Elastic Compute Cloud

In short, one of the best free courses to learn Amazon Web Services and Cloud Computing basics.

5. Amazon Web Services (AWS) — Zero to Hero

This is another short but truly hands-on AWS course, which teaches you how to perform common tasks in the AWS interface. In just 2 hours you will learn how to launch a WordPress website based on the Amazon EC2 service. You will also learn how to create a NodeJS-based web application, send an email with AWS SES, upload a file to AWS S3 (the storage solution provided by Amazon) and, finally, create and connect to an AWS relational database server. In short, a great course if you want to use AWS for hosting your application or want to learn how you can leverage the Cloud to host your application, and most importantly it's FREE.

That's all about some of the best free courses to learn Amazon Web Services or AWS. These are absolutely free courses on Udemy, but keep in mind that sometimes instructors convert their free courses to paid courses once they achieve their promotional targets. This means you should check the price of a course before you join, and if possible join early so that you can get the course for free. Once you are enrolled in a course it's free for life, and you can learn at any time from anywhere. I generally join a course immediately even if I am not going to learn AWS right away. This way I get access to the course and can start learning once I have some time or my priorities change.

Other free programming resources you may like to explore:

- 5 Free JavaScript Courses for Web Developers
- 5 Free Courses to Learn React JS for JavaScript Developers
- 5 Free Courses to Learn Core Spring, Spring Boot, and Spring MVC
- 5 Free Docker Courses for Java and DevOps Engineer
- 5 Courses to learn Maven And Jenkins for Java Developers
- 3 Books and Courses to Learn RESTful Web Services in Java
- 5 Courses to Learn Blockchain Technology for FREE
- 7 Free Selenium Webdriver courses for Java and C# developers
- 5 Free course to learn Servlet, JSP, and JDBC
- 5 Free TypeScript Courses for Web Developers
- 5 Free Big Data Courses to Learn Hadoop and Spark

Thanks for reading this article so far. If you like these AWS courses then please share them with your friends and colleagues.
If you have any questions or feedback then please drop a note.
Hackernoon

How to Ensure HIPAA Compliance Using AWS

HIPAA is the acronym for the Health Insurance Portability and Accountability Act of 1996. The act was created by the United States Congress in 1996 and amends both the Public Health Service Act (PHSA) and the Employee Retirement Income Security Act (ERISA), as well as the Internal Revenue Code, and it seeks to protect the health insurance coverage of individuals and groups.

Image Credit: Amazon Web Services

HIPAA has five titles, of which Title II is the most applicable to healthcare app development with respect to patient data privacy and preventing healthcare fraud. Title II contains the Administrative Simplification (AS) provisions, which set national standards for electronic health care transactions and health insurance plans, and national identifiers for providers.

It is mandatory for businesses to be HIPAA compliant, especially when they work with patient data. This is specifically required for businesses that access PHI, or Protected Health Information. PHI is released by entities that provide patients with the required information, as per their rights. AWS is powerful enough to process, store and transmit PHI data in a secure, compliant manner.

Noted features of HIPAA

The Health Insurance Portability and Accountability Act has set standards for how medical data should be shared among different healthcare systems. The idea is to protect critical patient data, prevent fraud of any kind and ensure that individual health care plans are portable, accessible and easily renewable. Other main features include:

- Safe electronic storage of patient medical information
- Establishing national standards, increasing efficiency, reducing administrative costs, etc.
- Ensuring criminal or civil penalties for those entities (health maintenance organizations, healthcare billing services, health insurers) that don't comply with HIPAA standards

If the AWS environment is not configured for HIPAA compliance, then all that protected data can go unprotected and fall into the hands of unauthorised individuals, thereby violating HIPAA rules.

AWS Supports HIPAA Compliance

In spite of the power of AWS, the fact is that a software service or cloud service can never be completely HIPAA compliant on its own. Compliance is about how well we know how to use the platform, in this case AWS, and not merely about the services the platform provides.

The platform helps you run sensitive workloads as per HIPAA. But in order to do that, you first need to accept the AWS Business Associate Addendum, or AWS BAA. After this, you will be able to include PHI according to HIPAA rules. This gives you access to a self-service portal in AWS, where you can review, accept and check the status of your AWS BAA. It covers the security, control and administrative processes mentioned in the act.

AWS supporting HIPAA compliance does not make your data immune to hacks, and if you leave your storage buckets (AWS S3 buckets) unprotected, you are clearly violating the rules of the act. Of course, the obvious solution would be to limit access to the S3 buckets containing PHI, but in spite of this, several healthcare organizations have been suspected of leaving their PHI open and vulnerable.

To ensure the rules are followed to the letter, AWS published a 26-page guide for healthcare organizations.
The guide is called Architecting for HIPAA Security and Compliance for Amazon Web Services, and it helps business enterprises set up access controls and secure AWS instances.

Amazon S3 buckets are built to be secure by default. Only the resource owner with administrator credentials can access the information in normal cases. But mistakes and errors happen while configuring permissions to access the resources; this is called misconfiguring the Amazon S3 bucket. In such cases, the data will be accessible to anyone who goes looking for it, and the location of the data will also be visible.

Once the entity signs a BAA with AWS, they are instructed on how to use the service, and when to use the access controls and permissions. In order to avoid mistakes while configuring S3 services, you can refer to the detailed documentation, which helps you set up access control and other permissions. There are multiple ways to add access and permissions, and this leads to multiple error points, where a tiny error can cost you dearly.

Whenever there are unprotected S3 buckets and PHI security gets weak, security researchers tend to spot them and alert the concerned healthcare organizations. Unfortunately for you, it is not just the security researchers that are watching your applications and data. Hackers and thieves are constantly on the lookout for weak points in S3 buckets, and at the first opportunity they will access the data and steal information.

The weakest point in the S3 buckets is probably where user authentication is performed. Anyone with the required credentials can gain access to the data, and such a person naturally becomes an authenticated user. According to Amazon, an authenticated user is simply a person with an AWS account, so granting access to authenticated users grants it to anyone with an AWS account.

We can use several AWS services to more easily achieve HIPAA compliance. Here are some examples: AWS Parameter Store, AWS RDS, AWS VPC, AWS EC2, etc.

AWS Parameter Store

Amazon Web Services comes with its own Systems Manager that lets you configure and manage your Amazon EC2 instances and a number of other AWS resources, including virtual machines and on-premises servers. Systems Manager provides a unified interface that helps you centralize operational data, find and resolve problems, and automate most routine tasks.

The AWS Systems Manager Parameter Store (SSM) provides hierarchical storage for configuration data management and secrets management. The store lets you create a SecureString parameter, with a plain-text parameter name and an encrypted parameter value, and the Parameter Store handles encrypting and decrypting these values. The point of the exercise is that you can create, store and manage data as parameter values. You can then use these parameters in a number of applications and services, and you can configure their policies and permissions as well. In this way, changing a single parameter value affects only the places that reference that specific parameter.

The Parameter Store offers hierarchical storage for configuration data, including database connection strings, passwords, license codes and so on. This is quite important for companies developing enterprise and small applications alike.
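To make the SecureString mechanism concrete, here is a minimal sketch using the AWS SDK for Go. The parameter name and value are made-up placeholders rather than anything the article prescribes, and a real setup would also lock down who can read the parameter through IAM.

package main

import (
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/ssm"
)

func main() {
	sess := session.Must(session.NewSession())
	svc := ssm.New(sess)

	// Store a database connection string as an encrypted SecureString parameter.
	// Name and value are placeholders; hierarchical names (e.g. /myapp/prod/...)
	// make it easy to scope permissions per environment.
	_, err := svc.PutParameter(&ssm.PutParameterInput{
		Name:      aws.String("/myapp/prod/db-connection-string"),
		Value:     aws.String("Server=db.internal;User=app;Password=example"),
		Type:      aws.String("SecureString"),
		Overwrite: aws.Bool(true),
	})
	if err != nil {
		log.Fatalf("put parameter: %v", err)
	}

	// Read it back, asking Parameter Store to decrypt the value.
	out, err := svc.GetParameter(&ssm.GetParameterInput{
		Name:           aws.String("/myapp/prod/db-connection-string"),
		WithDecryption: aws.Bool(true),
	})
	if err != nil {
		log.Fatalf("get parameter: %v", err)
	}
	fmt.Println("decrypted parameter length:", len(aws.StringValue(out.Parameter.Value)))
}

The application code never stores the secret itself; it only knows the parameter name, which is exactly the separation the article argues for.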
Mission-critical secrets like database connection credentials and other highly sensitive data must be protected with the help of services like this on HIPAA-eligible AWS, because failing to do so can result in serious problems, especially if the alternative is embedding these values directly into the application code.

The AWS Parameter Store is free of cost, scales as and when required, and is entirely managed in the AWS cloud. You can store the data in any available format. AWS Parameter Store can be found under the AWS Systems Manager service.

AWS RDS

The Amazon Relational Database Service (RDS) is a managed service offering that helps you build, manage and scale relational databases in the cloud. The service handles many standard database management tasks and offers resizable capacity for industry-standard relational databases. The databases themselves are familiar engines such as MySQL and Oracle, hosted on Amazon infrastructure.

Amazon RDS works with the other AWS cloud products and follows a pay-as-you-go model: users are billed on the basis of the conventional cloud utility computing model.

Amazon RDS is useful among entities because:

- It gives access to the functionality of several Microsoft SQL Server, Oracle and MySQL databases.
- It is compatible with the applications and tools generally used by developers.
- It helps users scale databases, processing resources and storage as the application demands.
- It can be integrated with SimpleDB, Amazon's NoSQL database, covering both relational and non-relational database needs.

Amazon RDS database engines are HIPAA eligible. Hence, you can use RDS to build HIPAA compliant applications and store healthcare-related information, including PHI, under a BAA with AWS, covering the entire healthcare analytics pipeline. The compliance program was extended to include Amazon RDS for MariaDB and Amazon RDS for SQL Server.

While architecting for HIPAA on AWS, we recommend keeping the database part separate by using RDS. The RDS service helps a lot with security, because that is what matters most: protection of critical data.

AWS VPC

According to Wikipedia, Amazon Virtual Private Cloud or VPC "is a commercial cloud computing service that provides users a virtual private cloud, by 'provision[ing] a logically isolated section of Amazon Web Services (AWS) Cloud'".

This service is similar to private clouds like OpenStack and HPE Helion Eucalyptus, and it closely resembles a traditional network while also giving you the benefits of a scalable infrastructure.

A VPC is dedicated to your AWS account and logically isolated from other networks in the cloud. As the networking layer of Amazon EC2, it lets you easily launch EC2 resources into the VPC. You have complete control over your IP address range, and you can configure network gateways, configure route tables, create subnets and so on. You can strengthen security by layering multiple controls, for both IPv4 and IPv6, around access to resources and applications.
Keeping the database instances in private subnets keeps them protected from the public internet, making the setup even more secure.

Advanced features:

- Advanced security features like security groups and network access control lists
- Both inbound and outbound filtering at the instance level and the subnet level
- Store data in Amazon S3
- Restrict access to Amazon S3 so it is accessible only from within the VPC
- Dedicated instances are possible, so an instance serves only a single customer, providing additional isolation
- Can be connected to other VPCs
- Can connect to SaaS solutions through AWS PrivateLink
- Secure connection to the corporate datacenter

Amazon EC2

Amazon Elastic Compute Cloud is another web service in Amazon's cloud computing platform. You can launch as many virtual servers as you need, manage storage, and configure security and networking. You don't have to invest in any kind of hardware to manage and deploy applications faster, and it is possible to scale up and scale down whenever there is a rise or fall in traffic.

Image Credit: Amazon Web Services

Amazon EC2 comes with different instance types (virtual computing environments), sizes and pricing plans for different computing requirements, so it is easy to come up with something that suits your budget.

More about Architecting for HIPAA in the Cloud

Amazon Web Services helps you run sensitive workloads that are regulated under the United States HIPAA. To include protected health information on AWS, you need to accept the AWS Business Associate Addendum (AWS BAA); this is also what allows you to legally process PHI. However, for this to be accepted, you first need to use AWS Artifact Agreements and confirm your account in AWS. AWS Artifact Agreements is used to review, accept and manage all the agreements in your account. Next, you can review the terms of your accepted agreement, and if you no longer need the agreement, you can always terminate it there as well.

You can click on this link to check the current list of services covered by the AWS BAA.

General Architecture Strategies

There are some general strategies to follow when using AWS for a HIPAA application. They are:

- Identify and separate protected and sensitive information from processing or orchestration
- Use automation to trace the flow of data
- Set logical boundaries between protected information and general data

Conclusion

The number of healthcare providers, IT professionals, insurers, and payers using AWS cloud-based services to ensure high levels of protection for patient data and information is growing by the day. AWS aligns itself with the U.S. Health Insurance Portability and Accountability Act of 1996 (HIPAA) to promise its customers that the processing, maintenance and storage of Protected Health Information is done without errors or vulnerabilities. This way you can be assured of HIPAA compliance while using AWS.

Interested in building HIPAA compliant apps using AWS? We'll be happy to help! Contact Us Today!

Originally published at Cabot Solutions on November 27, 2018.
Hackernoon

A crash course on serverless-side rendering with React.js, Next.js and AWS Lambda

Not so long ago I started exploring server-side rendered single-page applications. Yeah, try saying that three times fast. Building products for startups has taught me SEO is a must if you want an online presence. But you also want the performance SPAs can provide.

We want the best of both worlds: the SEO boost server-side rendering provides, and the speed of a Single Page Application. Today I'll show you all this while hosting it basically for free in a serverless environment on AWS Lambda.

TL;DR

Let's run through what this tutorial will cover. You can skim through and jump to the section that interests you. Or, be a nerd and keep reading. *whisper* Please be a nerd.

- What're we building?
- Configure and install dependencies
- Build the app with the Serverless Framework and Next.js
- Deploy the app to AWS Lambda

Note: The code we will write is already on GitHub if you need further reference or miss any steps, so feel free to check it out. The guys over at Cube.js gave me a quick rundown of React before I started writing this tutorial. They have a serverless analytics framework that plugs nicely into React. Feel free to give it a try.

What're we building?

Well, a blazing-fast React application of course! The cost of every SPA is lousy SEO capabilities though. So we need to build the app in a way that incorporates server-side rendering. Sounds simple enough. We can use Next.js, a lightweight framework for static and server-rendered React.js applications.

To accomplish this we need to spin up a simple Express server and configure the Next app to serve files through Express. It is way simpler than it sounds.

However, from the title, you can assume we don't like the word server in my neighborhood. The solution is to deploy this whole application to AWS Lambda! It is a tiny Node.js instance after all.

Ready? Let's get crackin'!

Configure and install dependencies

As always, we're starting with the boring part, setting up the project and installing dependencies.

1. Install the Serverless Framework

In order for serverless development to not be absolute torture, go ahead and install the Serverless Framework.

$ npm i -g serverless

Note: If you're using Linux or Mac, you may need to run the command as sudo.

Once installed globally on your machine, the commands will be available to you from wherever in the terminal. But for it to communicate with your AWS account you need to configure an IAM User. Jump over here for the explanation, then come back and run the command below, with the provided keys.

$ serverless config credentials \
    --provider aws \
    --key xxxxxxxxxxxxxx \
    --secret xxxxxxxxxxxxxx

Now your Serverless installation knows what account to connect to when you run any terminal command. Let's jump in and see it in action.

2. Create a service

Create a new directory to house your Serverless application services. Fire up a terminal in there. Now you're ready to create a new service.

What's a service you ask? View it as a project. But not really. It's where you define AWS Lambda functions, the events that trigger them and any AWS infrastructure resources they require, all in a file called serverless.yml.

Back in your terminal type:

$ serverless create --template aws-nodejs --path ssr-react-next

The create command will create a new service. Shocker! But here's the fun part. We need to pick a runtime for the function. This is called the template. Passing in aws-nodejs will set the runtime to Node.js. Just what we want. The path will create a folder for the service.
3. Install npm modules

Change into the ssr-react-next folder in your terminal. There should be three files in there, but for now, let's first initialize npm.

$ npm init -y

After the package.json file is created, you can install a few dependencies.

$ npm i \
    axios \
    express \
    serverless-http \
    serverless-apigw-binary \
    next \
    react \
    react-dom \
    path-match \
    url \
    serverless-domain-manager

These are our production dependencies, and I'll go into more detail explaining what they do a bit further down. The last one, called serverless-domain-manager, will let us tie a domain to our endpoints. Sweet!

Now, your package.json should look something like this.

https://medium.com/media/3308dbde18a584d0762a6fbf4a9ec641/href

We also need to add two scripts, one for building and one for deploying the app. You can see them in the scripts section of the package.json.

4. Configure the serverless.yml file

Moving on, let's finally open up the project in a code editor. Check out the serverless.yml file; it contains all the configuration settings for this service. Here you specify both general configuration settings and per-function settings. Your serverless.yml will be full of boilerplate code and comments. Feel free to delete it all and paste this in.

https://medium.com/media/7dc03ccd1b9b1a56b015f5cf5bb3070d/href

The functions property lists all the functions in the service. We will only need one function because it will run the Next app and render the React pages. It works by spinning up a tiny Express server, running the Next renderer alongside the Express router and passing the server to the serverless-http module.

In turn, this will bundle the whole Express app into a single lambda function and tie it to an API Gateway endpoint. Under the functions property, you can see a server function that will have a handler named server in the index.js file. API Gateway will proxy any and every request to the internal Express router which will then tell Next to render our React.js pages. Woah, that sounds complicated! But it's really not. Once we start writing the code you'll see how simple it really is.

We've also added two plugins, the serverless-apigw-binary for letting more mime types pass through API Gateway and the serverless-domain-manager which lets us hook up domain names to our endpoints effortlessly.

We also have a custom section at the bottom. The secrets property acts as a way to safely load environment variables into our service. They're later referenced by using ${self:custom.secrets.<environment_var>} where the actual values are kept in a simple file called secrets.json.

Apart from that, we're also letting the API Gateway binary plugin know we want to let all types through, and setting a custom domain for our endpoint.

That's it for the configuration, let's add the secrets.json file.

5. Add the secrets file

Add a secrets.json file and paste this in. This will keep us from pushing secret keys to GitHub.

https://medium.com/media/b5aea593cee11f74c9986948e1b8a1d4/href

Now, just by changing these values you can deploy different environments to different stages and domains. Pretty cool.

Build the app with the Serverless Framework and Next.js

To build a server-side rendered React.js app we'll use the Next.js framework. It lets you focus on writing the app instead of worrying about SEO. It works by rendering the JavaScript before sending it to the client. Once it's loaded on the client side, it'll cache it and serve it from there instead. You have to love the speed of it!

Let's start by writing the Next.js setup on the server.
1. Setting up the Next.js server(less)-side rendering

Create a file named server.js. Really intuitive, I know.

https://medium.com/media/89558f7ccc3c294784723816e539e755/href

It's pretty simple. We're grabbing Express and Next, creating a static route with express.static and passing it the directory of the bundled JavaScript that Next will create. The path is /_next, and it points to the .next folder.

We'll also set up the server-side routes and add a catch-all route for the client-side renderer.

Now, the app needs to be hooked up to serverless-http and exported as a lambda function. Create an index.js file and paste this in.

https://medium.com/media/43811f98cd002d6e3ab2d3701fd5e6b1/href

As you can see we also need to create a binaryMimeTypes.js file to hold all the mime types we want to enable. It's just a simple array which we pass into the serverless-http module.

https://medium.com/media/0e1de48d847ca80032a829d54e839435/href

Sweet, that's it regarding the Next.js setup. Let's jump into the client-side code!

2. Writing client-side React.js

In the root of your project create three folders named components, layouts, and pages. Once inside the layouts folder, create a new file with the name default.js, and paste this in.

https://medium.com/media/45c919706417cbaef2ff7e910b4fb972/href

The default view will have a <Meta /> component for setting the meta tags dynamically and a <Navbar /> component. The { children } will be rendered from the component that uses this layout.

Now add two more files: a navbar.js and a meta.js file in the components folder.

https://medium.com/media/5a8820581759fdca8611de4ab95b1905/href

This is an incredibly simple navigation that'll be used to navigate between some cute dogs. It'll make sense once we add something to the pages folder.

https://medium.com/media/4c8a9b27b9adb0472f773f03c9a0668e/href

The meta.js will make it easier for us to inject values into our meta tags. Now you can go ahead and create an index.js file in the pages folder. Paste in the code below.

https://medium.com/media/b130b16e94c2a84b1d8f748c9f1c8842/href

The index.js file will be rendered on the root path of our app. It calls a dog API and will show a picture of a cute dog.

Let's create more routes. Create a sub-folder called dogs and create an index.js file and a _breed.js file in there. The index.js will be rendered at the /dogs route, while the _breed.js will be rendered at /dogs/:breed, where :breed represents a route parameter.

Add this to the index.js in the dogs directory.

https://medium.com/media/5285daba41158e18fa3df6d8f6d85f23/href

And, another snippet in the _breed.js file in the dogs folder.

https://medium.com/media/a994bd3bd61ace9cfddb1a07b9cfe913/href

As you can see, in the Default component we're injecting custom meta tags. It will add custom fields in the <head> of your page, giving it proper SEO support!

Note: If you're stuck, here's what the code looks like in the repo.

Let's deploy it and see if it works.

Deploy the app to AWS Lambda

At the very beginning, we added a script to our package.json called deploy. It'll build the Next app and deploy the serverless service as we specified in the serverless.yml.

All you need to do is run:

$ npm run deploy

The terminal will return output with the endpoint for your app. We also need to add the domain for it to work properly. We've already added the configuration in the serverless.yml, but there's one more command we need to run.

$ sls create_domain

This will create a CloudFront distribution and hook it up to your domain.
Make sure that you've added the certificates to your AWS account. It usually takes around 20 minutes for AWS to provision a new distribution. Rest your eyes for a moment.

Once you're back, go ahead and deploy it all again.

$ npm run deploy

It should now be tied up to your domain. Here's what it should look like.

Nice! The app is up and running. Go ahead and try it out.

Wrapping up

This walkthrough was a rollercoaster of emotions! It gives you a new perspective on creating fast and performant single-page apps while at the same time keeping the SEO capabilities of server-rendered apps. However, with a catch. There are no servers you need to worry about. It's all running in a serverless environment on AWS Lambda. It's easy to deploy and scales automatically. Doesn't get any better.

If you got stuck anywhere, take a look at the GitHub repo for further reference, and feel free to give it a star if you want more people to see it on GitHub.

If you want to read some of my previous serverless musings, head over to my profile or join my newsletter!

https://medium.com/media/a80131fe3b67db34ff64d435f0cc9039/href

Or, take a look at a few of my articles right away:

- A crash course on Serverless with AWS — Building APIs with Lambda and Aurora Serverless
- A crash course on Serverless with AWS — Image resize on-the-fly with Lambda and S3
- A crash course on Serverless with AWS — Triggering Lambda with SNS Messaging
- A crash course on serverless-side rendering with Vue.js, Nuxt.js and AWS Lambda
- Building a serverless contact form with AWS Lambda and AWS SES
- A crash course on Serverless APIs with Express and MongoDB
- Solving invisible scaling issues with Serverless and MongoDB
- How to deploy a Node.js application to AWS Lambda using Serverless
- Getting started with AWS Lambda and Node.js
- A crash course on securing Serverless APIs with JSON web tokens
- Migrating your Node.js REST API to Serverless
- Building a Serverless REST API with Node.js and MongoDB
- A crash course on Serverless with Node.js

I also highly recommend checking out this article about Next.js, and this tutorial about the serverless domain manager.

Hope you guys and girls enjoyed reading this as much as I enjoyed writing it. If you liked it, slap that tiny clap so more people here on HackerNoon will see this tutorial. Until next time, be curious and have fun.

Originally published at dev.to.
Hackernoon

Hot world news

Ripple’s RippleNet XRP Showcases Real-World Effectiveness: Mercury FX

After partnering with Ripple, the firm behind the second-largest coin XRP, as one of its 200+ customers, Mercury FX announced via its official Twitter handle that it had completed its largest payment across RippleNet.

1/1 We've made our largest payments across RippleNet using #XRP – 86,633.00 pesos (£3,521.67) from the U.K. to Mexico in seconds. pic.twitter.com/WsHJuZTiOy — Mercury-fx Ltd (@mercury_fx_ltd) January 17, 2019

Using XRP, the firm transferred £3,521.67 or $4,552.41, and cited that UK-based Mustard Foods was able to save £79.17 and 31 hours on the transaction. Mustard Foods could be one of the best examples of the impact using RippleNet can have, as it opened doors to cheaper expenses, quicker orders and faster payments.

As covered by John P. Njui on EWN a few days ago, Ripple has announced via its website that 13 new financial institutions have joined RippleNet, propelling the total number of global customers to over 200. RippleNet currently operates in 40 countries across 6 continents. Out of the 13 aforementioned financial institutions, 5 are confirmed as using XRP to source instant liquidity for their cross-border payments. They are JNFX, SendFriend, Transpaygo, FTCS and Euro Exim Bank.

"By the end of this year [2018], major banks will use xRapid as a liquidity tool. By the end of next year [2019], I would certainly hope that we will see…in the order of magnitude…of dozens. But we also need to continue to grow that ecosystem…grow the liquidity." – Brad Garlinghouse

The success of the Ripple team may well rest on its marketing strategy and its plans for making the financial industry a better place. Rather than displacing traditional banking systems, it helps them make payments cheaper and faster, and in doing so it is finding its way into the spotlight of the crypto-verse.
Ethereum World News

BRD Wallet Expands Crypto User Access Across Europe With Coinify Partnership

Coinify, a European-based financial platform that provides a wallet, trading and payment processing solution, has announced that it is integrating BRD Wallet into its platform to deliver BRD wallet access to users across the European region.

Specifically, the partnership provides access to virtual currencies, like bitcoin, in 34 countries across the Single Euro Payments Area (SEPA). The SEPA region is a collection of member states in Europe that are part of a payment system which simplifies bank transfers denominated in EUR. The launch is also enabled largely in part by Coinify's newly rebranded trading solution for wallet partners.

Customers will now be able to use BRD Wallet to "purchase bitcoin at cost-efficient rates with SEPA bank transfers" within Coinify's trading platform. With BRD integration, customers will also retain control over their private keys while using Coinify.

Essentially, this provides a large number of users with an efficient and secure way to buy bitcoin and other cryptocurrencies, and then allows them to immediately store it in a manner where they control what happens to their money. Typically, a user will entrust the custody of their private keys to a centralized exchange while they are waiting for trades to be executed, and sometimes for much longer than that.

Aaron Lasher, co-founder and chief strategy officer at BRD, highlighted the advantages of the integration for security-focused users of the Coinify platform.

"We like exchanges and think security will get better in the future, but by using our integrated purchase and trading solutions, you get to keep your funds under your control 99 percent of the time, and only put them at a slightly higher risk for a short period when you make the exchange," Lasher told Bitcoin Magazine.

"Using a non-custodial wallet means that you and you alone control your funds. It's similar to having physical cash in a (highly secure) safe at home. Only in this case, we provide our customers a digital safe (the BRD wallet) that they can keep in their pocket and carry along. Nobody else in the world has access to your funds but you, and nobody can stop you from sending or receiving funds."

Integrating a wallet that allows users to own their funds and seamlessly make trades on a platform like Coinify could help to push bitcoin adoption forward.

"The financial industry is ripe for disruption and we see bitcoin and the other virtual currencies as the future of payments," Rikke Stær, chief commercial officer at Coinify, told Bitcoin Magazine. "At Coinify, we have experienced first-hand the rising adoption of bitcoin, and working with BRD as a user-friendly, decentralized wallet will only encourage the global reach of the currency."

"Since launching as the first iOS bitcoin wallet in the App Store over 4 years ago, we've grown tremendously in North America," Adam Traidman, CEO and co-founder of BRD, said in a statement. "Europe will be strategic in the next phase of BRD's global growth, and the partnership with Coinify will ensure our success in this crucial endeavour."

In August 2018, Canadian-based exchange Coinberry launched a similar BRD integration, allowing users to quickly and seamlessly buy, deposit and withdraw bitcoin on the Coinberry platform, while keeping control of their keys at all times.

This article originally appeared on Bitcoin Magazine.
Bitcoin Magazine

Crypto Payments Service BitPay Reports It Saw Over $1 Billion in Transactions in 2018

Major cryptocurrency payment service provider BitPay has reported $1 billion in transactions this past year, according to a press release of Jan. 16. According to the report, the company also set a new record for itself in terms of transaction fee revenue. […]
Bitcoin Central