AWS news

World latest news

Just how expensive is the full AWS SDK?

If you’re not familiar with how cold start works in the context of AWS Lambda, read this post first.

When a Node.js Lambda function cold starts, a number of things happen:

- the Lambda service has to find a server with enough capacity to host the new container
- the new container is initialized
- the Node.js runtime is initialized
- your handler module is initialized, which includes initializing any global variables and functions you declare outside the handler function

If you enable active tracing for a Lambda function, you can see how much time is spent on these steps in X-Ray. Unfortunately, the time it takes to initialize the container and the Node.js runtime is not recorded as segments, but you can work it out from the difference in durations. Here, Initialization refers to the time it takes to initialize the handler module.

The above trace is for the function below, which requires the AWS SDK and nothing else. As you can see, this simple require added 147ms to the cold start.

const AWS = require('aws-sdk')

module.exports.handler = async () => {}

Consider this the cost of doing business when your function needs to interact with AWS resources. But if you only need to interact with one service (e.g. DynamoDB), you can save some initialization time with this one-liner.

const DynamoDB = require('aws-sdk/clients/dynamodb')
const documentClient = new DynamoDB.DocumentClient()

It requires the DynamoDB client directly without initializing the whole AWS SDK. I ran an experiment to see how much cold start time you can save with this simple change. Credit goes to my colleague Justin Caldicott for piquing my interest and doing a lot of the initial analysis.

In addition to the AWS SDK, we often require the X-Ray SDK too and use it to auto-instrument the AWS SDK. Unfortunately, the aws-xray-sdk package also carries some baggage we don’t need: by default it supports Express.js apps, MySQL and Postgres.
If you are only interested in instrumenting the AWS SDK and the http/https modules, then you only need aws-xray-sdk-core.

Methodology

I tested a number of configurations:

- no AWS SDK
- requiring only the DynamoDB client
- requiring the full AWS SDK
- requiring the X-Ray SDK only (no AWS SDK)
- requiring the X-Ray SDK and instrumenting the AWS SDK
- requiring the X-Ray SDK Core and instrumenting the AWS SDK
- requiring the X-Ray SDK Core and instrumenting only the DynamoDB client

Each of these functions is traced by X-Ray, with the sample rate set to 100% so we don’t miss anything. We are only interested in the duration of the Initialization segment, as it corresponds to the time spent initializing these dependencies. The no-AWS-SDK case is our control group: it lets us see how much time each additional dependency adds to the Initialization duration.

To collect a statistically significant sample set of data, I decided to automate the process using Step Functions. The state machine takes an input { functionName, count }.

- The SetStartTime step adds the current UTC timestamp to the execution state. This is necessary because we need the start time of the experiment to fetch the relevant traces from X-Ray.
- The Loop step triggers the desired number of cold starts for the specified function. To trigger cold starts, I programmatically update an environment variable before invoking the function. That way, every invocation is guaranteed to be a cold start.
- The Wait30Seconds step makes sure that all the traces are published to X-Ray before we attempt to analyze them.
- The Analyze step fetches all the relevant traces from X-Ray and outputs several statistics about the Initialization duration.

Each configuration is tested over 1000 cold starts. Occasionally the X-Ray traces are incomplete (where is the AWS::Lambda::Function segment?); these incomplete traces are excluded in the Analyze step.

Each configuration is also tested with WebPack (using the serverless-webpack plugin).
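The state machine described above can be sketched in Amazon States Language. This is only a sketch: the state names follow the article, but the Lambda ARNs and the exact looping mechanism (here a simple counter returned by the trigger function) are assumptions.

```json
{
  "Comment": "Trigger N cold starts for a function, then analyze the X-Ray traces",
  "StartAt": "SetStartTime",
  "States": {
    "SetStartTime": {
      "Type": "Task",
      "Resource": "arn:aws:lambda:REGION:ACCOUNT:function:set-start-time",
      "ResultPath": "$.startTime",
      "Next": "Loop"
    },
    "Loop": {
      "Type": "Task",
      "Resource": "arn:aws:lambda:REGION:ACCOUNT:function:trigger-cold-start",
      "ResultPath": "$.remaining",
      "Next": "MoreToDo"
    },
    "MoreToDo": {
      "Type": "Choice",
      "Choices": [
        { "Variable": "$.remaining", "NumericGreaterThan": 0, "Next": "Loop" }
      ],
      "Default": "Wait30Seconds"
    },
    "Wait30Seconds": {
      "Type": "Wait",
      "Seconds": 30,
      "Next": "Analyze"
    },
    "Analyze": {
      "Type": "Task",
      "Resource": "arn:aws:lambda:REGION:ACCOUNT:function:analyze-traces",
      "End": true
    }
  }
}
```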
Thanks to Erez Rokah for the suggestion.

The Results

These are the Initialization times for all the test cases. Key observations:

- WebPack improves the Initialization time across the board.
- Without any dependencies, Initialization time averages only 1.72ms without WebPack and 0.97ms with WebPack.
- Adding the AWS SDK as the only dependency adds an average of 245ms without WebPack. This is fairly significant, and adding WebPack doesn’t improve things much either.
- Requiring only the DynamoDB client (the one-liner change discussed earlier) saves up to 176ms! In 90% of the cases, the saving was over 130ms. With WebPack, the saving is even more dramatic.
- The cost of requiring the X-Ray SDK is about the same as the AWS SDK.
- There’s no statistically significant difference between using the full X-Ray SDK and X-Ray SDK Core, with or without WebPack.

Hi, my name is Yan Cui. I’m an AWS Serverless Hero and the author of Production-Ready Serverless. I have run production workloads at scale on AWS for nearly 10 years, and I have been an architect or principal engineer in a variety of industries ranging from banking, e-commerce and sports streaming to mobile gaming. I currently work as an independent consultant focused on AWS and serverless. You can contact me via Email, Twitter and LinkedIn.

Come learn about operational best practices for AWS Lambda: CI/CD, testing & debugging functions locally, logging, monitoring, distributed tracing, canary deployments, config management, authentication & authorization, VPC, security, error handling, and more. Get your copy here.

Just how expensive is the full AWS SDK? was originally published in Hacker Noon on Medium, where people are continuing the conversation by highlighting and responding to this story.

AWS Landing Zone Solution — Accelerating Cloud Adoption

Amazon Web Services (AWS) is an extensive cloud service platform by Amazon that provides database storage, computing power, content delivery and more, helping businesses grow. AWS assists companies with a myriad of tasks including data processing, warehousing, game development and a lot more. Owing to its unique offering, the popularity of AWS continues to grow; its parent company Amazon generated net sales revenue of a whopping USD 232 billion in 2018.

Challenges of Creating AWS Accounts

Creating an AWS account is a strenuous task, as it involves multiple steps that require manual authorization. Additionally, there are various other factors to take into consideration, like the requirement for an IAM (Identity and Access Management) account, logging accounts, the handling of cross-account permissions and much more.

AWS Landing Zone — The Solution

To streamline this complex process, AWS offers a turnkey solution, called the AWS Landing Zone, that completely automates the creation of a secure and efficient multi-account environment on AWS.

Simplifying the AWS Account Creation Process

The AWS Landing Zone solution aims to help customers set up a secure multi-account AWS environment while adhering to AWS best practices. It automates the set-up of the AWS environment for running secure and scalable workloads while implementing an initial security baseline. It also offers a baseline environment to start with a multi-account architecture, identity and access management, data security, governance, network design and logging.

Types of AWS Accounts

Essentially, there are four types of AWS accounts that can be deployed through the AWS Service Catalog.

AWS Organization Account

The AWS Organization Account is used to efficiently manage configuration of, and access to, the AWS Landing Zone. It offers the ability to create and manage member accounts.
With the deployment of AWS Landing Zone in the AWS Organization account, users can avail themselves of features such as Amazon Simple Storage Service, account configuration StackSets, AWS Single Sign-On (SSO) configuration, AWS Organizations Service Control Policies, etc.

Shared Services Account

The Shared Services Account is used to create shared infrastructure services. This account by default hosts AWS-managed Active Directory for AWS SSO integration within a shared Amazon Virtual Private Cloud (Amazon VPC). The Amazon VPC can automatically peer with new AWS accounts created through the Account Vending Machine (AVM).

Log Archive Account

This account includes a central Amazon S3 bucket for storing copies of AWS CloudTrail and AWS Config log files. Access to this account is typically restricted to the auditors and the security team for forensic investigations of account activity.

Security Account

This account creates auditor (read-only) and administrator (full-access) roles for all AWS Landing Zone managed accounts. These roles are used by the company’s security and compliance team to audit and perform emergency security operations in case of discrepancies.

Account Vending Machine Architecture

The Account Vending Machine (AVM) is one of the key components of the AWS Landing Zone. It is an AWS Service Catalog product that enables users to create new AWS accounts in Organizational Units (OUs), preconfigured with an account security baseline and a predefined network.

Account Set-up Using AWS Landing Zone

Installing

The entire setup process is executed through CloudFormation via an ‘initiation template’ that allows users to select basic settings. The initialization process also writes a config template to an S3 bucket, which serves as a source for CodePipeline.
CodePipeline picks up every change made to the config and applies it to the main infrastructure.

To test the installation process, set the ‘BuildLandingZones’ parameter to ‘False’ to prevent the configuration from launching immediately. This allows you to inspect the final config and make any necessary modifications before running it. In addition, set ‘LockStackSetExecutionRole’ to ‘False’ to ensure access to sub-accounts. However, take care to change this parameter back to ‘True’ when you are done, as it governs administrator access in sub-accounts.

Creating Accounts

Once the AWS Landing Zone has been set up, users can easily create new AWS accounts via the ‘AWS-Landing-Zone-Account-Vending-Machine’ product in the AWS Service Catalog. Once the AVM is launched, the user adds the preferred account name and chooses the appropriate AVM version.

AWS Landing Zone — Easing the Cloud Adoption Process

1. Fuss-Free Cloud Adoption

AWS Landing Zone allows users to create various interconnected and structured accounts seamlessly, saving significant time by accelerating the transition to a cloud platform.

2. Flexible and Scalable

With the AWS Landing Zone, users get a consistent underlying platform, allowing them to efficiently develop their cloud set-up. A consistent base platform also makes it easier for users to reuse code when modifying their cloud platform in the future.

3. Secure and Compliant Infrastructure

AWS Landing Zone adheres to AWS best practices. This ensures that security, governance and compliance requirements are embedded into all landing zone accounts by default.
Furthermore, AWS offers optimized security through its account baseline settings, including:

- AWS CloudTrail — created within each account and configured to send logs to a centrally managed Amazon Simple Storage Service (Amazon S3) bucket and to Amazon CloudWatch Logs.
- AWS Config — stores account configuration log files in a centrally managed Amazon S3 bucket in the log archive account.
- AWS Config Rules — monitors storage encryption, root account multi-factor authentication (MFA), the AWS Identity and Access Management (IAM) password policy, insecure security group rules, and Amazon S3 public reads and writes.
- AWS Identity and Access Management (IAM) — used to configure the IAM password policy.
- Cross-Account Access — configures audit and emergency security administrative access to AWS Landing Zone accounts.
- Amazon Virtual Private Cloud (VPC) — configures the initial network for an account, including deleting the default VPC, deploying the network type requested through the AVM, and network peering with the Shared Services account.
- AWS Landing Zone Notifications — Amazon CloudWatch alarms and events are configured to send notifications on root account login, API authentication failures, and console sign-in failures.
- Amazon GuardDuty — configured to set up automatic threat detection.

AWS Landing Zone — Facilitating a Strong Foundation for a Multi-Account Structure

AWS Landing Zone offers an effective and efficient way to build a multi-account structure with security and governance built in. It also provides a stable foundation on AWS that facilitates seamless modifications, enabling users to spend less time worrying about migrating their resources and more time generating valuable outputs on the cloud platform.

Want to test it on your own?

You can test AWS Landing Zone yourself.
We recommend that you first run your tests in a newly created account with a new organization, so that you can gain some experience in a sandbox environment. Landing Zone itself comes as a CloudFormation template and can therefore be installed with one click. Here are AWS’ docs:

- AWS Landing Zone Developer Guide
- AWS Landing Zone Implementation Guide
- AWS Landing Zone User Guide

AWS Landing Zone Solution — Accelerating Cloud Adoption was originally published in Hacker Noon on Medium, where people are continuing the conversation by highlighting and responding to this story.
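The two test-run parameters discussed above can be supplied when launching the initiation template. A sketch of a CloudFormation parameters file, assuming the standard `--parameters file://params.json` format; the parameter names are as given in the article, and remember to flip LockStackSetExecutionRole back to True afterwards.

```json
[
  { "ParameterKey": "BuildLandingZones",         "ParameterValue": "False" },
  { "ParameterKey": "LockStackSetExecutionRole", "ParameterValue": "False" }
]
```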

Getting down and dirty with metric-based alerting for AWS Lambda

The phrase “better safe than sorry” gets thrown around whenever people talk about monitoring or getting observability into your AWS resources. But the truth is that you can’t sit around and wait until a problem arises; you need to proactively look for opportunities to improve your application in order to stay one step ahead of the competition. Setting up alerts that go off whenever a particular event happens is a great way to keep tabs on what’s going on behind the scenes of your serverless applications, and this is exactly what I’d like to tackle in this article.

AWS Lambda Metrics

AWS Lambda monitors functions for you automatically and reports metrics through Amazon CloudWatch. These metrics include total invocations, throttles, duration, errors, DLQ errors, etc. You should think of CloudWatch as a metrics repository: metrics are the basic concept in CloudWatch, and they represent a time-ordered set of data points. Metrics are defined by a name, one or more dimensions, and a namespace. Every data point has a time stamp and an optional unit of measure.

And while CloudWatch is a good tool for getting the metrics of your functions, Dashbird takes it up a notch by providing the missing link you need in order to properly debug those pesky Lambda issues. It allows you to detect any kind of failure within all programming languages supported by the platform. This includes crashes, configuration errors, timeouts, early exits, etc.
Another quite valuable thing that Dashbird offers is error aggregation, which gives you immediate metrics about errors, memory utilization, duration, invocations and code execution.

AWS Lambda Metrics Explained

Before we jump in, I feel we should discuss the metrics themselves to make sure we all understand what every term means and what it refers to. Let’s take a peek at the metrics in the AWS Lambda namespace and explain how they operate.

- Invocations counts the number of times a function has been invoked in response to an invocation API call or to an event; it replaces the RequestCount metric. This includes both successful and failed invocations, but it doesn’t include throttled attempts. Note that AWS Lambda only sends this metric to CloudWatch when its value is nonzero.
- Errors measures the number of invocations that failed because of errors in the function itself; it replaces the ErrorCount metric. Failed invocations may trigger a retry attempt, which can succeed. There are limitations we must mention: it doesn’t include invocations that failed because the invocation rate exceeded the default concurrency limits (429 error code), nor failures caused by internal service errors (500 error code).
- DeadLetterErrors increments when Lambda is unable to write the failed event payload to your configured dead-letter queue. This can happen due to permission errors, misconfigured resources, timeouts, or throttling from downstream services.
- Duration measures the elapsed time from when the function code starts executing as a result of an invocation until it stops executing. For billing, the duration is rounded up to the nearest 100 milliseconds.
Note that AWS Lambda sends this metric to CloudWatch only if the value is nonzero.

- Throttles counts the number of times an invocation attempt was throttled because the invocation rate exceeded the account’s concurrency limit (429 error code). Be aware that throttled invocations may trigger automatic retry attempts, which can succeed.
- IteratorAge applies to stream-based invocations only, i.e. functions triggered by an Amazon DynamoDB or Kinesis stream. It measures the age of the last record for each batch of records processed: the difference between the time the last record in the batch was written to the stream and the time Lambda receives the batch.
- ConcurrentExecutions is an aggregate metric across all functions in the account, as well as for any function with a custom concurrency limit set; it is not emitted for individual versions or aliases of functions. In essence, it measures the sum of concurrent executions at a given point in time, and it should be viewed as an average metric when aggregated across a time period.
- UnreservedConcurrentExecutions is almost the same as ConcurrentExecutions, but it represents the sum of the concurrency of the functions that don’t have a custom concurrency limit specified. It applies only at the account level, and should likewise be viewed as an average metric when aggregated across a period of time.

Where do you start?

CloudWatch

To access the metrics using the CloudWatch console, open the console and choose Metrics in the navigation pane.
Then, in the CloudWatch Metrics by Category pane, select the Lambda Metrics category.

Dashbird

To access your metrics, log in to the app; the first screen shows you a bird’s-eye view of all the important stats of your functions: cost, invocations, memory utilization, function duration, as well as errors. Everything is conveniently packed onto a single screen.

Setting Up Metric-Based Alarms for Lambda Functions

It is essential to set up alarms that notify you when your Lambda function ends up with an error, so you can react promptly.

CloudWatch

To set up an alarm for a failed function (which could be caused by anything from an error in the code to the entire website going down), go to the CloudWatch console, choose Alarms on the left and click Create Alarm. Choose “Lambda Metrics” and look for your Lambda’s name in the list. Check the box of the row where the metric name is “Errors”, then click Next. Now you can enter a name and a description for the alarm. From here, set up the alarm to be triggered whenever “Errors” is above 0 for one consecutive period. As the Statistic, select “Sum”, and in the “Period” dropdown choose the number of minutes required for your particular case. In the Notification box, choose “select notification list” in the dropdown menu and pick your SNS endpoint. The last step is to click the “Create Alarm” button.

Dashbird

Setting up metric-based alerts with Dashbird is not as complicated; in fact, it’s quite the opposite. In the app, go to the Alerts menu, click the add button on the right side of your screen and give the alert a name. After that, select the metric you are interested in, which can be a cold start, retry, invocation or, of course, an error.
All you have to do is select the rules (e.g. alert me whenever the number of cold starts is over 5 in a 10-minute window) and you are done.

How do you pick the right solution for your metric-based alerts?

Tough question. While CloudWatch is a great tool, the second you have more Lambdas in your system you’ll find it very hard to debug or even understand your errors due to the large volume of information. Dashbird, on the other hand, offers details about your invocations and errors that are simple and concise, and it has a lot more flexibility when it comes to customization. My colleague Renato made a simple table that compares the two services.

I’d be remiss not to make an observation: with AWS CloudWatch, whenever a function is invoked, a micro-container is spun up to serve the request and a log stream is opened in CloudWatch for it. The same log stream is re-used as long as this container remains alive, which means the same log stream gets logs from multiple invocations in one place. This quickly gets very messy, and it’s hard to debug issues because you need to open the latest log stream and browse all the way down to the latest invocation’s logs. In Dashbird, we show individual invocations ordered by time, which makes it a lot easier for developers to understand what’s going on at any point in time.

Have anything useful to add? Please do so in the comment box below.

Originally published on February 20, 2019.

Getting down and dirty with metric-based alerting for AWS Lambda was originally published in Hacker Noon on Medium, where people are continuing the conversation by highlighting and responding to this story.
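The CloudWatch console steps above map directly onto the CloudWatch PutMetricAlarm API. Below is a sketch of the request parameters only; the function name and SNS topic ARN are placeholders, and the actual API call (e.g. via the AWS SDK’s putMetricAlarm) is left as a comment so the snippet stands alone.

```javascript
// Alarm parameters mirroring the console walkthrough: fire whenever the
// Lambda "Errors" metric sums to more than 0 over one 60-second period.
const alarmParams = {
  AlarmName: 'my-function-errors',                              // placeholder
  Namespace: 'AWS/Lambda',
  MetricName: 'Errors',
  Dimensions: [{ Name: 'FunctionName', Value: 'my-function' }], // placeholder
  Statistic: 'Sum',
  Period: 60,                 // seconds; pick what fits your case
  EvaluationPeriods: 1,       // "for one consecutive period"
  Threshold: 0,
  ComparisonOperator: 'GreaterThanThreshold',
  AlarmActions: ['arn:aws:sns:us-east-1:123456789012:my-alerts'] // placeholder ARN
};
// With the AWS SDK you would pass this to:
//   new AWS.CloudWatch().putMetricAlarm(alarmParams).promise()
console.log(`${alarmParams.MetricName} > ${alarmParams.Threshold}`);
```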

Multi-chain operating system Quant Network becomes Amazon AWS Partner

CryptoNinjas: Quant Network, the creator of Overledger, a platform that facilitates the development of decentralized, multi-chain applications, today announced that it has achieved Technology Partner status with Amazon’s AWS Partner Network (APN). The move will enable more than a million active customers to benefit...

The AWS Serverless Tools You Can’t Live Without

Since day one of the serverless revolution there has been a consistent criticism that it’s “not production ready”. Whilst this was arguably valid in the early days, the scaffolding that supports a serverless approach has matured over the last few years to a point where this is no longer true. We all have our own favourites, but here’s a collection to get you primed:

1. Cloudflare

You may think that a CDN is a weird choice for a serverless toolset, but when you’re paying per invocation, the underlying DoS protection, firewall and WAF setup can be a lifesaver. I like Cloudflare for its page rules and advanced caching, which take even further load off our architecture and costs. Arguably their Workers at the edge are in themselves a serverless framework, but for simplicity we’ll focus more on the backend in this article vs the “edge” offerings that are starting to appear.

Top tip: when setting up caching via page rules, check out the Origin Cache Control feature, which lets you set up the caching instructions in your code and Cloudflare will follow their lead.

2. serverless.com

Deployment and provisioning have come a long way, and the Serverless Framework is arguably at the forefront of that movement. With support for multiple vendors and languages, deploying code with Serverless is as simple as authoring a YAML file. You simply choose what code you want to expose and what events it should respond to, and Serverless does the rest … as simple as serverless deploy.

Top tip: if you’re using VS Code, look out for the Serverless Framework snippets.

3. Lumigo

Lumigo gives you an end-to-end debugging platform you can’t find elsewhere. Most monitoring platforms (such as AWS X-Ray) fall down when traversing services such as SNS or SQS … Lumigo intelligently maps your stack and can track traffic from Lambda to DynamoDB via SQS … and back.

Top tip: Lumigo is great for production environments, but can be a godsend during development, so get it installed sooner rather than later.

4. DynamoDB

A key sticking point in the early days of serverless was the lack of pay-per-use / serverless DB providers … essentially leading to an architecture whereby your code could scale as and when needed, but your database would still be statically provisioned. At best you would be over-provisioned (and wasting money); at worst you would be under-provisioned when a spike in activity hit (and suffer downtime). Those days are behind us, with services such as DynamoDB (NoSQL), Serverless Aurora (SQL) and Google Firebase now mainstream and battle-tested. MongoDB Atlas is a worthy mention for those invested in Mongo, but as a DBaaS it’s still vulnerable to the provisioning issues mentioned above.

Top tip: utilise the vendor’s monitoring tools and set your alarms around throttling. Then day by day, week by week, reduce your capacity until you get alerts. Once you know your level, add 20% for redundancy and you should have a fairly optimised platform.

5. Docker Lambda

Replicating your environment locally gives extra peace of mind before deployment, and these Docker images give exactly that. It’s the closest you can get to live, as it replicates the AWS Lambda environment almost identically — including installed software and libraries, file structure and permissions, environment variables, context objects and behaviours — even the user and running process are the same.

Top tip: get this set up nice and early and you can benefit from an offline environment … perfect for the digital nomads amongst us.

In truth, we could have listed 101 supporting services that we rely on every day, from Twilio / SendGrid and CloudWatch to PureSec. The ecosystem is growing quickly and there are a lot of great offerings coming to market. If there’s one we missed that you think is worthy of a mention, please let me know.

Final word: Anaibol maintains a staggering list of serverless tools which is well worth a read …
you can see it here.

The AWS Serverless Tools You Can’t Live Without was originally published in Hacker Noon on Medium, where people are continuing the conversation by highlighting and responding to this story.
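To illustrate tool #2 above: deploying with the Serverless Framework really is a short YAML file. A minimal sketch; the service, function and handler names are placeholders, and the runtime is just an example.

```yaml
service: hello-service

provider:
  name: aws
  runtime: nodejs10.x        # example runtime; pick your own

functions:
  hello:
    handler: handler.hello   # handler.js exporting a function named hello
    events:
      - http:                # expose via API Gateway
          path: hello
          method: get
```

Run serverless deploy and the framework provisions the function and its HTTP event for you.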
More news sources

AWS news by Finrazor


Hot news

Hot world news

Maximine Coin Surge, eToro Adds TRON, Rakuten and Yahoo, Boss Crypto - Cryptocurrency News

Maximine coin is up more than 700% within the past 30 days. eToro also announced yesterday that it will add Tron (TRX) to its platform, which has more than 10 million registered users. Mattie will also talk about Rakuten and Yahoo continuing the mainstream progression towards cryptocurrency, as well as Boss Crypto, a crypto investment and education platform.

Looking for the best cryptocurrency wallets? Check out BitLox, CoolWallet S, Trezor, Ledger Nano S and KeepKey.

References:
- Boss Crypto – Crypto Investment and Education Platform
- Yahoo and Rakuten Continue Mainstream Progression Towards Cryptocurrency
- Rakuten Wallet Launch Announced for March 30, 2019
- Here’s Why Crypto Maximine Coin (MXM) Jumped 754.5% in March
- eToro Adds Tron TRX to its Platform with More than 10 Million Users

DISCLAIMER: The information discussed on the Altcoin Buzz YouTube, Altcoin Buzz Ladies YouTube, Altcoin Buzz Podcast or other social media channels, including but not limited to Twitter, Telegram chats, Instagram, Facebook, website etc., is not financial advice. This information is for educational, informational and entertainment purposes only. Any information and advice or investment strategies are thoughts and opinions only, relevant to the accepted levels of risk tolerance of the writer, reviewer or narrator, and their risk tolerance may be different than yours. We are not responsible for your losses. Bitcoin and other cryptocurrencies are high-risk investments, so please do your due diligence and consult a financial advisor before acting on any information provided.
Copyright Altcoin Buzz Pte Ltd. All rights reserved.
Altcoin Buzz

Maximine Coin [MXM] Jumps 30% Higher While Top Currencies Continued Bleeding

In a continuously declining market, one coin has stood out. Maximine Coin (MXM) broke into the top 40 cryptocurrencies with a spike of 30 percent over the past 24 hours.

Why Did MXM Coin Pump Higher?

The value surge is quite surprising because the top crypto assets, including Bitcoin, Ethereum, XRP, Litecoin, EOS, Bitcoin Cash, Binance Coin and many others, are in decline. The major concern, however, comes from the spike in MXM Coin trading volume on the CoinBene exchange, which is suspected of wash trading. Despite the broader decline, Maximine Coin’s MXM token is ruling with rising volume among the top 40 cryptocurrencies. At the moment, MXM’s trading volume counts $159,653,386, and the coin has gained 30.11 percent over the last 24 hours. It is trading at $0.096818. Maximine Coin is presently available on a handful of crypto trading platforms, including CoinBene, HitBTC, Coinbit and Livecoin. Among these exchanges, the highest trading volume is split between CoinBene and HitBTC, in USDT, ETH and BTC pairs respectively.

Is There Anything Related to CoinBene’s Suspected Wash Trading?

Looking closer at CoinMarketCap, the highest trading volume for the MXM coin can be seen on CoinBene, an exchange that was previously flagged for suspected wash trading in a report by Bitwise Asset Management. Reports further claimed that the volume is faked by the exchange itself, inflating the actual numbers to catch users’ attention. Moreover, the exchange is registered in Singapore and doesn’t require KYC for a user to open an account. Notably, the Bitforex exchange, where MXM Coin will soon be listed, is also based in Singapore, and Maximine Coin is reportedly registered in the same country. Additionally, recent reports reveal that the coin has gained mainstream attention from Bitforex, a Singapore-based trading platform.
The firm announced it will list the MXM coin on its exchange, which many speculate is related to the coin’s significant performance.

Heard the news? 👂🏼 Seen the papers? 👀 If you haven't, keep your eyes glued to the screen and your ears wide open because @maximinecoin is coming to @bitforexcom! 🍻🙌🏼 Details in link below! 👇🏼 — MaxiMine (@maximinecoin) March 26, 2019

What do you think about CoinBene and its trading contribution to Maximine Coin? Share your opinion with us.

The post Maximine Coin [MXM] Jumps 30% Higher While Top Currencies Continued Bleeding appeared first on Coingape.