Azure news

An open, flexible, enterprise-grade cloud computing platform.

Latest world news

Is Azure Profitable?

No one wants to buy into faltering products that are here today and gone tomorrow, and the same is true of businesses with regard to their IT infrastructure. Given the productivity dependencies, the traditionally slow rate of return, and the steep financial burden of IT infrastructure, CTOs and the like around the world often look to key performance indicators (KPIs) such as revenue, profitability, and customer counts, much like a prospective shareholder would, in order to gauge the viability of infrastructure-as-a-service (IaaS) platforms such as AWS and Azure before making long-term investments in them.

For example, if a service provider is breaking roughly even or operating at a loss after 8 years, this could be indicative of operational inefficiencies, architectural oversights, and looming changes which increase the likelihood of costly outages. It may also point to the possibility of rate hikes down the road, or suggest that the platform in question is not standing up to the test of time. But if revenue and profit are on point, that is indicative of long-term stability and much less risk. Comparatively, it is like being given the choice of paddle boarding on a calm summer morning or in the middle of a winter snowstorm.

Needless to say, you can tell a lot about a solution by measuring its profitability. This is why frontrunners such as Amazon, Apple, and Google are not bashful about disclosing individual revenue, profit, and user counts for their products. It is also why the competition behind them tends to get creative rather than simply reporting on the same metrics. And when considering the lengths that Microsoft goes through in order to suppress the individual merits of Azure, I am forced to question just how profitable Azure truly is.

Before anything else, though, it's worth highlighting that any refresher course on lying with statistics would remind you that omission is by far the easiest way to do it. Combine shying away from helpful metrics with promenading misleading, less valuable metrics instead, essentially leveraging data like an octopus jetting its ink, and you have a formidable and proven recipe for statistical gaslighting that companies trailing the frontrunner of their industry love to resort to. Put simply, some businesses choose to plead ignorance and resort to diversionary tactics rather than throwing it all on the table, so to speak, and casting themselves in a negative light with the truth; Microsoft is no exception to this, and Apple has been guilty of it as of late as well.

As mentioned before, and as the #1 cloud infrastructure provider, Amazon happily reports on the individual merits of AWS by posting revenue, profit, and user counts. Why hide being the best? As the #2 cloud infrastructure provider, though, Microsoft opts to bundle Azure's earnings into a container called Intelligent Cloud, which averages Azure's revenue, profits, and losses with legacy server software such as Windows Server, SQL Server, Active Directory, Hyper-V and so on, making it impossible to compare the two platforms on equal ground.

Azure reports total users but not revenue or profit outside of the Intelligent Cloud (a crutch?). Meanwhile, LinkedIn reports revenue, profit, and total users while omitting monthly usage statistics such as monthly active users (MAU).
Although it is Microsoft's policy not to report on helpful metrics such as MAU, which is why LinkedIn no longer reports it, the company seems to have no problem reporting it for Azure AD, Office 365, Windows, Edge, Cortana, Bing, Skype (oops, now they don't), Xbox, Minecraft, and other services that perpetuate the narrative of having a strong foothold in their market. With the above in mind, you can see how Microsoft seems to flip its policies tactically, but you can also see a clear trend of omissions being a tell just the same, even while masked with seemingly arbitrary policies.

Sometimes not to speak is to speak. Omitting data and creative bundling tactics, both of which Microsoft is leveraging at present, are not exceptions to this; they are the billion-dollar standard. Although Microsoft claims to be a changed company, it still has the same general counsel, Brad Smith, that it has had since its laughable antitrust days, and we can also see Microsoft putting extra effort into muddying the waters, so to speak, with run rates, obscure metrics, and massive marketing overspending rather than simply presenting its data, as it has done historically.

However, we can still speculate by giving Azure the benefit of the doubt and simply assuming that it is solely responsible for all of the Intelligent Cloud's revenue in FY18 Q4, which was $9,606,000,000, so that we can look at Azure's average revenue per account (ARPA) on its best hypothetical day and compare it to AWS. While we're at it, let's also assume that Azure has 13 million accounts rather than its reported 12.8 million, just to account for growth since this data is over a year old. So let's take $9,606,000,000 / 13 million accounts = $738.92 average revenue per account for the latest quarter. Not bad. Good job, Hypothetical Azure.

AWS, on the other hand, has reported a paltry 1 million accounts subscribing to it at the moment while generating only $6.68 billion in revenue, $2.1 billion of which was profit, in FY18 Q3. So we can take $6,680,000,000 / 1 million accounts = $6,680 average revenue per account for its latest quarter. For the sake of comparison, we can then divide AWS's ARPA by Azure's ARPA ($6,680 / $738.92), which shows us that AWS is monetizing its accounts 9.04x more effectively than Azure is. And if AWS could maintain this ARPA with Azure's account base (6,680 * 13,000,000), it would be generating $86,840,000,000 in revenue per quarter. 😳

AWS being 31.4% efficient (profit/revenue) while being 9.04x more efficient than Azure when measured by ARPA also indicates that Azure's efficiency could be as low as 3.5%, or $359,320,000 in Q4, which in itself would easily rationalize bundling it into a container such as the Intelligent Cloud, composed of more efficient products. This would also mean that Azure could be generating as little as $27.64 average profit per account compared to the $2,100 average profit per account that AWS is seeing at present, which is still 76x more than Azure even when Azure is given a significant benefit of the doubt.

These differences only become greater if Azure represented half of the Intelligent Cloud's revenue, roughly $4.8 billion. If that were the case, it would be averaging $369.23 in revenue per account and netting anywhere from $163 million on the low end to $1.5 billion in profit this quarter, if it were as efficient as Amazon. If accurate, this would also show AWS to be as much as 18x more efficient than Azure from the perspective of ARPA.
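Those back-of-the-envelope figures are easy to reproduce. Here is a small C# sketch using the same numbers; the account counts and the revenue attribution are the article's hypotheticals, not reported figures:

```csharp
using System;

class ArpaComparison
{
    static void Main()
    {
        // Figures quoted in the article (hypothetical best case for Azure).
        double intelligentCloudRevenue = 9_606_000_000; // FY18 Q4, attributed entirely to Azure
        double azureAccounts = 13_000_000;              // rounded up from 12.8M
        double awsRevenue = 6_680_000_000;              // FY18 Q3
        double awsProfit = 2_100_000_000;
        double awsAccounts = 1_000_000;

        double azureArpa = intelligentCloudRevenue / azureAccounts; // ~$739 per quarter
        double awsArpa = awsRevenue / awsAccounts;                  // $6,680 per quarter

        Console.WriteLine($"Azure ARPA (best case): {azureArpa:C2}");
        Console.WriteLine($"AWS ARPA:               {awsArpa:C2}");
        Console.WriteLine($"AWS/Azure ARPA ratio:   {awsArpa / azureArpa:F2}x"); // ~9x
        Console.WriteLine($"AWS margin:             {awsProfit / awsRevenue:P1}"); // ~31.4%
    }
}
```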
But I digress.

In summary, when a data-driven technology company as equipped as Microsoft turns its pockets inside out instead of posting basic KPIs such as itemized profits or MAU, and reports on metrics that no one asked for instead, with regard to a service that has been in production for 8 years now, it is usually a consequence of those KPIs contradicting the narrative being sold, not of them being unavailable. In lieu of these metrics, and even when giving Azure the benefit of the doubt in the comparison to AWS above, there appears to be more disparity between AWS and Azure than Microsoft would like us to believe after 8 years in production. Azure may indeed be profitable, but when considering the disparity in operational efficiency between AWS and Azure and the exhaustive effort that Microsoft makes towards suppressing its individual performance metrics, I am given no option but to ask how profitable Azure is, or whether it is profitable at all.

Is Azure Profitable? was originally published in Hacker Noon on Medium.
Hackernoon

Making Sense of Azure Durable Functions

Stateful Workflows on top of Stateless Serverless Cloud Functions — this is the essence of the Azure Durable Functions library. That's a lot of fancy words in one sentence, and they might be hard for the majority of readers to understand.

Please join me on the journey where I'll try to explain how those buzzwords fit together. I will do this in three steps:

- Describe the context of modern cloud applications relying on serverless architecture;
- Identify the limitations of basic approaches to composing applications out of the simple building blocks;
- Explain the solutions that Durable Functions offer for those problems.

Microservices

Traditionally, server-side applications were built in a style which is now referred to as Monolith. If multiple people and teams were developing parts of the same application, they mostly contributed to the same code base. If the code base were structured well, it would have some distinct modules or components, and a single team would typically own each module:

(Figure: Multiple components of a monolithic application)

Usually, the modules would be packaged together at build time and then deployed as a single unit, so a lot of communication between modules would stay inside the OS process.

Although the modules could stay loosely coupled over time, the coupling almost always occurred on the level of the data store because all teams would use a single centralized database.

This model works great for small- to medium-size applications, but it turns out that teams start getting in each other's way as the application grows, since synchronization of contributions takes more and more effort.

As a complex but viable alternative, the industry came up with a revised service-oriented approach commonly called Microservices. The teams split the big application into "vertical slices" structured around the distinct business capabilities:

(Figure: Multiple components of a microservice-based application)

Each team then owns a whole vertical — from public communication contracts, or even UIs, down to the data storage. Explicitly shared databases are strongly discouraged. Services talk to each other via documented and versioned public contracts.

If the borders for the split were selected well — and that's the trickiest part — the contracts stay stable over time and thin enough to avoid too much chattiness. This gives each team enough autonomy to innovate at their best pace and to make independent technical decisions.

One of the drawbacks of microservices is the change in deployment model. The services are now deployed to separate servers connected via a network:

(Figure: Challenges of communication between distributed components)

Networks are fundamentally unreliable: they work just fine most of the time, but when they fail, they fail in all kinds of unpredictable and undesirable manners. There are books written on the topic of distributed systems architecture. TL;DR: it's hard.

A lot of the new adopters of microservices tend to ignore such complications. REST over HTTP(S) is the dominant style of connecting microservices. Like any other synchronous communication protocol, it makes the system brittle.

Consider what happens when one service becomes temporarily unhealthy: maybe its database goes offline, or it's struggling to keep up with the request load, or a new version of the service is being deployed. All the requests to the problematic service start failing — or worse — become very slow. The dependent service waits for the response, and thus blocks all incoming requests of its own.
The error propagates upstream very quickly, causing cascading failures all over the place:

(Figure: Error in one component causes cascading failures)

The application is down. Everybody screams and starts the blame war.

Event-Driven Applications

While cascading failures of HTTP communication can be mitigated with patterns like circuit breakers and graceful degradation, a better solution is to switch to the asynchronous style of communication as the default. Some kind of persistent queueing service is used as an intermediary.

The style of application architecture which is based on sending events between services is known as Event-Driven. When a service does something useful, it publishes an event — a record about a fact which happened to its business domain. Another service listens to the published events and executes its own duty in response to those facts:

(Figure: Communication in event-driven applications)

The service that produces events might not know about the consumers. New event subscribers can be introduced over time. This works better in theory than in practice, but the services tend to get coupled less.

More importantly, if one service is down, other services don't catch fire immediately. The upstream services keep publishing the events, which build up in the queue but can be stored safely for hours or days. The downstream services might not be doing anything useful for this particular flow, but they can stay healthy otherwise.

However, another potential issue comes hand-in-hand with loose coupling: low cohesion. As Martin Fowler notes in his essay "What do you mean by Event-Driven":

"It's very easy to make nicely decoupled systems with event notification, without realizing that you're losing sight of the larger-scale flow."

Given many components that publish and subscribe to a large number of event types, it's easy to stop seeing the forest for the trees. Combinations of events usually constitute gradual workflows executed in time. A workflow is more than the sum of its parts, and an understanding of the high-level flow is paramount to controlling the system behavior.

Hold this thought for a minute; we'll get back to it later. Now it's time to talk cloud.

Cloud

The birth of the public cloud changed the way we architect applications. It made many things much more straightforward: provisioning of new resources in minutes instead of months, scaling elastically based on demand, and resiliency and disaster recovery at the global scale.

It made other things more complicated. Here is the picture of the global Azure network:

(Figure: Azure locations with network connections)

There are good reasons to deploy applications to more than one geographical location: among others, to reduce network latency by staying close to the customer, and to achieve resilience through geographical redundancy. The public cloud is the ultimate distributed system. As you remember, distributed systems are hard.

There's more to it. Each cloud provider has dozens and dozens of managed services, which is both a curse and a blessing. Specialized services are great at providing off-the-shelf solutions to common complex problems. On the flip side, each service has distinct properties regarding consistency, resiliency and fault tolerance.

In my opinion, at this point developers have to embrace the public cloud and apply distributed system design on top of it.
If you agree, there is an excellent way to approach it.

Serverless

The slightly provocative term serverless is used to describe cloud services that do not require provisioning of VMs, instances, workers, or any other fixed capacity to run custom applications on top of them. Resources are allocated dynamically and transparently, and the cost is based on their actual consumption rather than on pre-purchased capacity.

Serverless is more about the operational and economic properties of the system than about the technology per se. Servers do exist, but they are someone else's concern. You don't manage the uptime of serverless applications: the cloud provider does.

On top of that, you pay for what you use, similar to the consumption of other commodity resources like electricity. Instead of buying a generator to power up your house, you just purchase energy from the power company. You lose some control (e.g., no way to select the voltage), but this is fine in most cases. The great benefit is no need to buy and maintain the hardware.

Serverless compute does the same: it supplies standard services on a pay-per-use basis.

If we talk more specifically about Function-as-a-Service offerings like Azure Functions, they provide a standard model to run small pieces of code in the cloud. You zip up the code or binaries and send it to Azure; Microsoft takes care of all the hardware and software required to run it. The infrastructure automatically scales up or down based on demand, and you pay per request, CPU time and memory that the application consumed. No usage — no bill.

However, there's always a "but". FaaS services come with an opinionated development model that applications have to follow:

- Event-Driven: for each serverless function you have to define a specific trigger — the event type which causes it to run, be it an HTTP endpoint or a queue message;
- Short-Lived: functions can only run up to several minutes, and preferably for a few seconds or less;
- Stateless: as you don't control where and when function instances are provisioned or deprovisioned, there is no way to reliably store data within the process between requests; external storage has to be utilized.

Frankly speaking, the majority of existing applications don't really fit into this model. If you are lucky enough to work on a new application (or a new module of one), you are in better shape.

A lot of serverless applications may be designed to look somewhat similar to this example from the Serverless360 blog:

(Figure: Sample application utilizing "serviceful" serverless architecture)

There are 9 managed Azure services working together in this app. Most of them have a unique purpose, but the services are all glued together with Azure Functions. An image is uploaded to Blob Storage, an Azure Function calls the Vision API to recognize the license plate and sends the result to Event Grid, another Azure Function puts that event into Cosmos DB, and so on.

This style of cloud application is sometimes referred to as Serviceful to emphasize the heavy usage of managed services "glued" together by serverless functions.

Creating a comparable application without any managed services would be a much harder task, even more so if the application has to run at scale. Moreover, there's no way to keep the pay-as-you-go pricing model in a self-hosted world.

The application pictured above is still pretty straightforward. The processes in enterprise applications are often much more sophisticated.

Remember the quote from Martin Fowler about losing sight of the large-scale flow.
That was true for microservices, but it's even more true for the "nanoservices" of cloud functions. I want to dive deeper and give you several examples of related problems.

Challenges of Serverless Composition

For the rest of the article, I'll define an imaginary business application for booking trips to software conferences. In order to go to a conference, I need to buy tickets to the conference itself, purchase the flights, and book a room at a hotel.

In this scenario, it makes sense to create three Azure Functions, each one responsible for one step of the booking process. As we prefer message passing, each Function emits an event which the next function can listen for:

(Figure: Conference booking application)

This approach works; however, problems do exist.

Flexible Sequencing

As we need to execute the whole booking process in sequence, the Azure Functions are wired one after another by configuring the output of one function to match the event source of the downstream function.

In the picture above, the functions' sequence is hard-wired. If we were to swap the order of booking the flights and reserving the hotel, that would require a code change — at least of the input/output wiring definitions, but probably also of the functions' parameter types.

In this case, are the functions really decoupled?

Error Handling

What happens if the Book Flight function becomes unhealthy, perhaps due to an outage of the third-party flight-booking service? Well, that's why we use asynchronous messaging: after the function execution fails, the message returns to the queue and is picked up again by another execution.

However, such retries happen almost immediately for most event sources. This might not be what we want: an exponential back-off policy could be a smarter idea. At this point, the retry logic becomes stateful: the next attempt should "know" the history of previous attempts to make a decision about retry timing.

There are more advanced error-handling patterns too. If execution failures are not intermittent, we may decide to cancel the whole process and run compensating actions against the already completed steps.

An example of this is a fallback action: if the flight is not possible (e.g., no routes for this origin-destination combination), the flow could choose to book a train instead:

(Figure: Fallback after 3 consecutive failures)

This scenario is not trivial to implement with stateless functions. We could wait until a message goes to the dead-letter queue and then route it from there, but this is brittle and not expressive enough.

Parallel Actions

Sometimes the business process doesn't have to be sequential. In our reservation scenario, there might be no difference whether we book a flight before a hotel or vice versa. It could be desirable to run those actions in parallel.

Parallel execution of actions is easy with the pub-sub capabilities of an event bus: both functions should subscribe to the same event and act on it independently.

The problem comes when we need to reconcile the outcomes of parallel actions, e.g., calculate the final price for expense reporting purposes:

(Figure: Fan-out / fan-in pattern)

There is no way to implement the Report Expenses block as a single Azure Function: functions can't be triggered by two events, let alone correlate two related events.

The solution would probably include two functions, one per event, and shared storage between them to pass information about the first completed booking to the one that completes last. All this wiring has to be implemented in custom code. The complexity grows if more than two functions need to run in parallel.

Also, don't forget the edge cases. What if one of the functions fails? How do you make sure there is no race condition when writing to and reading from the shared storage?

Missing Orchestrator

All these examples give us a hint that we need an additional tool to organize low-level, single-purpose, independent functions into high-level workflows.

Such a tool can be called an Orchestrator because its sole mission is to delegate work to stateless actions while maintaining the big picture and history of the flow.

Azure Durable Functions aims to provide such a tool.

Introducing Azure Durable Functions

Azure Functions

Azure Functions is the serverless compute service from Microsoft. Functions are event-driven: each function defines a trigger — the exact definition of the event source, for instance, the name of a storage queue.

Azure Functions can be programmed in several languages. A basic Function with a Storage Queue trigger implemented in C# would look roughly like the following.
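A minimal sketch of such a function, assuming a queue named "myqueue-items" and the standard ILogger parameter (the exact details of the original sample may differ):

```csharp
using Microsoft.Azure.WebJobs;
using Microsoft.Extensions.Logging;

public static class MyFirstFunction
{
    [FunctionName("MyFirstFunction")]
    public static void Run(
        [QueueTrigger("myqueue-items")] string message, // queue name is an assumption
        ILogger log)
    {
        // Log the information about the incoming message.
        log.LogInformation($"Received: {message}");
    }
}
```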
The FunctionName attribute exposes the C# static method as an Azure Function named MyFirstFunction. The QueueTrigger attribute defines the name of the storage queue to listen to. The function body logs the information about the incoming message.

Durable Functions

Durable Functions is a library that brings workflow orchestration abstractions to Azure Functions. It introduces a number of idioms and tools to define stateful, potentially long-running operations, and it manages a lot of the mechanics of reliable communication and state management behind the scenes.

The library records the history of all actions in Azure Storage services, enabling durability and resilience to failures.

Durable Functions is open source, Microsoft accepts external contributions, and the community is quite active.

Currently, you can write Durable Functions in 3 programming languages: C#, F#, and JavaScript (Node.js). All my examples are going to be in C#. For JavaScript, check the quickstart and samples. For F#, see the samples and my walkthrough, and stay tuned for another article soon.

Workflow-building functionality is achieved by the introduction of two additional types of triggers: Activity Functions and Orchestrator Functions.

Activity Functions

Activity Functions are simple, stateless, single-purpose building blocks that do just one task and have no awareness of the bigger workflow. A new trigger type, ActivityTrigger, was introduced to expose functions as workflow steps, as I explain below.

Here is a simple Activity Function implemented in C#.
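A minimal sketch, using the BookConference name, the ActivityTrigger binding, and the ConfTicket property bag described below; the ticket's properties and the stubbed booking logic are assumptions:

```csharp
using Microsoft.Azure.WebJobs;

public class ConfTicket
{
    public string Code { get; set; }
    public decimal Price { get; set; }
}

public static class BookConferenceActivity
{
    [FunctionName("BookConference")]
    public static ConfTicket BookConference([ActivityTrigger] string conference)
    {
        // Call the real conference booking service here; the result below is a stub.
        return new ConfTicket { Code = $"{conference}-TICKET", Price = 600m };
    }
}
```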
It has a common FunctionName attribute to expose the C# static method as an Azure Function named BookConference. The name is important because it is used to invoke the activity from orchestrators.

The ActivityTrigger attribute defines the trigger type and points to the input parameter conference, which the activity expects to get for each invocation.

The function can return a result of any serializable type; my sample function returns a simple property bag called ConfTicket.

Activity Functions can do pretty much anything: call other services, load and save data from/to databases, and use any .NET libraries.

Orchestrator Functions

The Orchestrator Function is a unique concept introduced by Durable Functions. Its sole purpose is to manage the flow of execution and data among several activity functions. Its most basic form chains multiple independent activities into a single sequential workflow.

Let's start with an example which books a conference ticket, a flight itinerary, and a hotel room one by one:

(Figure: 3 steps of a workflow executed in sequence)

The implementation of this workflow is defined by another C# Azure Function, this time with OrchestrationTrigger.
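A minimal sketch of that orchestrator, following the CallActivityAsync pattern described below; the activity names other than BookConference, the payload types, and the orchestrator's own name are assumptions (ConfTicket is the property bag from the activity sketch above):

```csharp
using System.Threading.Tasks;
using Microsoft.Azure.WebJobs;

public static class SequentialWorkflow
{
    [FunctionName("BookTripSequence")]
    public static async Task Run(
        [OrchestrationTrigger] DurableOrchestrationContext context)
    {
        // Each call sends a queue message to the activity and "awaits" its reply.
        var conference = await context.CallActivityAsync<ConfTicket>("BookConference", "ServerlessConf");
        var flight = await context.CallActivityAsync<string>("BookFlight", conference);
        var hotel = await context.CallActivityAsync<string>("BookHotel", flight);
    }
}
```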
Again, attributes are used to describe the function for the Azure runtime.

The only input parameter has the type DurableOrchestrationContext. This context is the tool that enables the orchestration operations.

In particular, the CallActivityAsync method is used three times to invoke three activities one after the other. The method body looks very typical of any C# code working with a Task-based API. However, the behavior is entirely different. Let's have a look at the implementation details.

Behind the Scenes

Let's walk through the lifecycle of one execution of the sequential workflow above.

When the orchestrator starts running, the first CallActivityAsync invocation is made to book the conference ticket. What actually happens here is that a queue message is sent from the orchestrator to the activity function.

The corresponding activity function gets triggered by the queue message. It does its job (books the ticket) and returns the result. The activity function serializes the result and sends it as a queue message back to the orchestrator:

(Figure: Messaging between the orchestrator and the activity)

When the message arrives, the orchestrator gets triggered again and can proceed to the second activity. The cycle repeats — a message gets sent to the Book Flight activity, it gets triggered, does its job, and sends a message back to the orchestrator. The same message flow happens for the third call.

Stop-resume behavior

As discussed earlier, message passing is intended to decouple the sender and receiver in time. For every message in the scenario above, no immediate response is expected.

On the C# level, when the await operator is executed, the code doesn't block the execution of the whole orchestrator. Instead, it just quits: the orchestrator stops being active and its current step completes.

Whenever a return message arrives from an activity, the orchestrator code restarts. It always starts with the first line. Yes, this means that the same line is executed multiple times: up to the number of messages to the orchestrator.

However, the orchestrator stores the history of its past executions in Azure Storage, so the effect of the second pass over the first line is different: instead of sending a message to the activity, it already knows the result of that activity, so await returns this result back and assigns it to the conference variable.

Because of these "replays", the orchestrator's implementation has to be deterministic: don't use DateTime.Now, random numbers or multi-threaded operations; more details are in the documentation.

Event Sourcing

Azure Functions are stateless, while workflows require state to keep track of their progress. Every time a new action towards the workflow's execution happens, the framework automatically records an event in table storage.

Whenever an orchestrator restarts the execution because a new message arrives from its activity, it loads the complete history of this particular execution from storage. The Durable Context uses this history to decide whether to call the activity or return the previously stored result.

The pattern of storing the complete history of state changes as an append-only event store is known as Event Sourcing. An event store provides several benefits:

- Durability — if a host running an orchestration fails, the history is retained in persistent storage and is loaded by the new host where the orchestration restarts;
- Scalability — append-only writes are fast and easy to spread over multiple storage servers;
- Observability — no history is ever lost, so it's straightforward to inspect and analyze even after the workflow is complete.

Here is an illustration of the notable events that get recorded during our sequential workflow:

(Figure: Log of events in the course of orchestrator progression)

Billing

Azure Functions on the serverless consumption-based plan are billed per execution plus per duration of execution.

The stop-replay behavior of durable orchestrators causes a single workflow "instance" to execute the same orchestrator function multiple times. This also means paying for several short executions.

However, the total bill usually ends up being much lower compared to the potential cost of blocking synchronous calls to activities. The price of 5 executions of 100 ms each is significantly lower than the cost of 1 execution of 30 seconds.

By the way, the first million executions per month are at no charge, so many scenarios incur no cost at all from the Azure Functions service.

Another cost component to keep in mind is Azure Storage. The Queues and Tables that are used behind the scenes are charged to the end customer. In my experience, this charge remains close to zero for low- to medium-load applications.

Beware of unintentional infinite loops or indefinite recursive fan-outs in your orchestrators. Those can get expensive if you leave them out of control.

Error handling and retries

What happens when an error occurs somewhere in the middle of the workflow? For instance, a third-party flight booking service might not be able to process the request:

(Figure: One activity is unhealthy)

This situation is expected by Durable Functions. Instead of silently failing, the activity function sends a message containing the information about the error back to the orchestrator.

The orchestrator deserializes the error details and, at the time of replay, throws a .NET exception from the corresponding call. The developer is free to put a try..catch block around the call and handle the exception.
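A minimal sketch of that pattern; the BookTrain fallback mirrors the train fallback example from earlier, and the exception handling here is simplified to catching the base Exception type:

```csharp
using System;
using System.Threading.Tasks;
using Microsoft.Azure.WebJobs;

public static class BookTripWithFallback
{
    [FunctionName("BookTripWithFallback")]
    public static async Task Run(
        [OrchestrationTrigger] DurableOrchestrationContext context)
    {
        // ConfTicket is the property bag from the activity sketch earlier.
        var conference = await context.CallActivityAsync<ConfTicket>("BookConference", "ServerlessConf");

        try
        {
            // If the activity reported a failure, the exception is re-thrown
            // here when the orchestrator replays this call.
            await context.CallActivityAsync("BookFlight", conference);
        }
        catch (Exception)
        {
            // Fallback: book a train instead (activity name is an assumption).
            await context.CallActivityAsync("BookTrain", conference);
        }
    }
}
```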
The code above falls back to a "backup plan" of booking another itinerary. Another typical pattern would be to run a compensating activity to cancel the effects of any previous actions (un-book the conference in our case) and leave the system in a clean state.

Quite often, the error might be transient, so it might make sense to retry the failed operation after a pause. It's such a common scenario that Durable Functions provides a dedicated retry API. The retry policy in the original sample instructs the library to:

- Retry up to 5 times;
- Wait for 1 minute before the first retry;
- Increase the delay before every subsequent retry by a factor of 2 (1 min, 2 min, 4 min, etc.).

The significant point is that, once again, the orchestrator does not block while awaiting retries. After a failed call, a message is scheduled for a moment in the future to re-run the orchestrator and retry the call.

Sub-orchestrators

Business processes may consist of numerous steps. To keep the code of orchestrators manageable, Durable Functions allows nested orchestrators. A "parent" orchestrator can call out to child orchestrators via the context.CallSubOrchestratorAsync method; the sample in the original post calls it twice to book two conferences, one after the other.

Fan-out / Fan-in

What if we want to run multiple activities in parallel?

For instance, in the example above, we could wish to book two conferences, but the booking order might not matter. Still, when both bookings are completed, we want to combine the results to produce an expense report for the finance department:

(Figure: Parallel calls followed by a final step)

In this scenario, the BookTrip orchestrator accepts an input parameter with the name of the conference and returns the expense information. ReportExpenses needs to receive both expenses combined.

This goal can be easily achieved by scheduling two tasks (i.e., sending two messages) without awaiting them separately. We use the familiar Task.WhenAll method to await both and combine the results.
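A minimal sketch of the fan-out / fan-in, assuming the BookTrip sub-orchestrator returns its expense total as a decimal:

```csharp
using System.Threading.Tasks;
using Microsoft.Azure.WebJobs;

public static class BookTwoConferences
{
    [FunctionName("BookTwoConferences")]
    public static async Task Run(
        [OrchestrationTrigger] DurableOrchestrationContext context)
    {
        // Fan-out: schedule both bookings without awaiting them individually.
        Task<decimal> first = context.CallSubOrchestratorAsync<decimal>("BookTrip", "ConferenceA");
        Task<decimal> second = context.CallSubOrchestratorAsync<decimal>("BookTrip", "ConferenceB");

        // Fan-in: await both results; the orchestrator replays as each reply arrives.
        decimal[] expenses = await Task.WhenAll(first, second);

        await context.CallActivityAsync("ReportExpenses", expenses);
    }
}
```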
Remember that awaiting the WhenAll method doesn't synchronously block the orchestrator. It quits the first time and then restarts two times on the reply messages received from the activities. The first restart quits again, and only the second restart makes it past the await.

Task.WhenAll returns an array of results (one result per input task), which is then passed to the reporting activity.

Another example of parallelization could be a workflow sending e-mails to hundreds of recipients. Such a fan-out wouldn't be hard with normal queue-triggered functions: simply send hundreds of messages. However, combining the results, if required for the next step of the workflow, is quite challenging. It's straightforward with a durable orchestrator: schedule one activity per recipient in a loop and await the whole collection with a single Task.WhenAll.

Making hundreds of roundtrips to activities and back could cause numerous replays of the orchestrator. As an optimization, if multiple activity functions complete around the same time, the orchestrator may internally process several messages as a batch and restart the orchestrator function only once per batch.

Other Concepts

There are many more patterns enabled by Durable Functions. Here is a quick list to give you some perspective:

- Waiting for the first completed task in a collection (rather than all of them) using the Task.WhenAny method. Useful for scenarios like timeouts or competing actions.
- Pausing the workflow for a given period or until a deadline.
- Waiting for external events, e.g., bringing human interaction into the workflow.
- Running recurring workflows, where the flow repeats until a certain condition is met.

Further explanation and code samples are in the docs.

Conclusion

I firmly believe that serverless applications utilizing a broad range of managed cloud services are highly beneficial to many companies, due to both the rapid development process and the properly aligned billing model.

Serverless tech is still young; more high-level architectural patterns need to emerge to enable expressive and composable implementations of large business systems.

Azure Durable Functions suggests some of the possible answers. It combines the clarity and readability of sequential RPC-style code with the power and resilience of event-driven architecture.

The documentation for Durable Functions is excellent, with plenty of examples and how-to guides. Learn it, try it for your real-life scenarios, and let me know your opinion — I'm excited about the serverless future!

Acknowledgments

Many thanks to Katy Shimizu, Chris Gillum, Eric Fleming, KJ Jones, William Liebenberg, and Andrea Tosato for reviewing the draft of this article and for their valuable contributions and suggestions. The community around Azure Functions and Durable Functions is superb!

Originally published at mikhail.io.

Making Sense of Azure Durable Functions was originally published in Hacker Noon on Medium.
Hackernoon

Microsoft Releases Cloud-Based Azure Development Kit

According to an announcement published on Nov. 15, Microsoft, one of the leading software companies in the world, has released a cloud-based Azure development kit that is powered by […] The post Microsoft Releases Cloud-Based Azure Development Kit appeared first on UseTheBitcoin.
Use The Bitcoin

Microsoft’s Azure MFA down for second time in two weeks

Microsoft's Azure Multi-Factor Authentication (MFA) service went down for the second time in just over a week. The problem occurred on Nov. 27 around 9:15 a.m. Eastern when several Office 365 users began reporting on Twitter that they were unable to log into their service due to MFA issues. Microsoft's Azure status dashboard was updated around 10:15 a.m. ET suggesting a potential cross-region outage impacting MFA. "Impacted customers may experience failures when attempting to authenticate into Azure resources where MFA is required by policy," the dashboard status said. "Engineers are investigating the issue and the next update will be provided in 60 minutes or as events warrant." Just over a week earlier, Office 365 users were hit by another multi-factor authentication issue which left users across the globe unable to sign into their Microsoft services. Mimecast Product Manager Pete Banham told SC Media that with two outages in just over a week, any mitigation costs would likely have paid for themselves many times over already. "When everything is working smoothly, it can be easy for an organization to judge a backup plan as an unnecessary expense, but as incidents like this show, the potential disruption could come at an even greater price," Banham said. "This latest outage makes clear that relying on a single supplier is simply not an option," he said. "Every organization must evaluate whether they have the right cyber resilience and continuity plan in place to stay up and running regardless of any future incidents." The post Microsoft's Azure MFA down for second time in two weeks appeared first on SC Media.
SC Media

How Azure AD Could Be Vulnerable to Brute-Force and DOS Attacks

Source: https://www.microsoft.com/en-us/microsoft-365/blog/2017/11/13/how-organizations-are-connecting-their-on-premises-identities-to-azure-ad/

Azure AD is the de facto gatekeeper of Microsoft cloud solutions such as Azure, Office 365, and Enterprise Mobility. As an integral component of their cloud ecosystem, it serves roughly 12.8 million organizations, 950+ million users worldwide, and 90% of Fortune 500 companies, and those numbers grow every year. Given such a resume, one might presume that Azure Active Directory is secure, but is it?

Despite Microsoft itself proclaiming "Assume Breach" as the guiding principle of its security strategy, if you had told me a week ago that Azure or Office 365 was vulnerable to rudimentary attacks and could not be considered secure, I probably would have laughed you out of the room. But when a client of ours recently had several of their Office 365 mailboxes compromised by a simple brute-force attack, I was given no alternative but to question the integrity of Azure AD as a whole instead of attributing the breach to the services merely leveraging it, and what I found wasn't reassuring.

After a simple "Office 365 brute force" search on Google, and without even having to write a line of code, I found that I was late to the party and that Office 365 is indeed susceptible to brute-force and password spray attacks via remote PowerShell (RPS). It was further discovered that these vulnerabilities are actively being exploited on a broad scale while remaining incredibly difficult to detect during or after the fact. Skyhigh Networks named this sort of attack "Knock Knock" and went so far as to estimate that as many as 50% of all tenants are actively being attacked at any given time. Even worse, it seems as if there is no way to correct this within Azure AD without consequently rendering yourself open to denial of service (DOS) attacks.

Source: https://cssi.us/office-365-brute-force-powershell/

In fact, this sort of attack is so prevalent that it happens to be one of the biggest threats to cloud tenant security at Microsoft according to Mark Russinovich (CTO of Azure), and it is among several reasons that Microsoft itself advises its customers to enable multi-factor authentication (MFA) for all users and implement the advanced threat intelligence available only at E5 subscription levels or greater; basically requiring companies to give Microsoft more money to secure its own solutions. But MFA doesn't impede hackers from cracking passwords or protect businesses from a DOS attack, nor does it help those who are unaware of its necessity, as many tenants are at present.

Source: https://docs.microsoft.com/en-us/powershell/exchange/exchange-online/connect-to-exchange-online-powershell/mfa-connect-to-exchange-online-powershell?view=exchange-ps

Further, since RPS does not work with delegated administration privileges (DAP) and MFA, partners consisting of consultants, managed service providers, and support providers also cannot use their partner credentials to connect to their clients' tenants via RPS for advanced administration and scripting. Even though they can easily manage their clients via a browser-based admin center with MFA, they often resort to creating admin accounts within the Office 365 tenant itself instead; others do it simply for ease of access to the admin console or for when they are not the Partner of Record.
In turn, these accounts are precisely what many of these attacks are targeting, often unbeknownst to admins, and Deloitte's breach is a perfect example of such a scenario.

Unfortunately, these accounts are often stripped of MFA security to make them more convenient and accessible for the multitude of support and operations staff working for the various companies offering support services, and the credentials seldom change or expire when someone leaves the company. On top of being vulnerable to cracking and breach, such accounts are subject to Office 365's default password expiration policy of 730 days, if expiration is not disabled outright, rendering them vulnerable to a prolonged breach at that. Needless to say, they are ripe for attack, and this exact scenario is what enabled a hacker to have unabridged administrative access to Deloitte's Exchange Online tenant for 6+ months.

Complicating matters even further, the natural solution to this problem renders the tenant vulnerable to DOS attacks by virtue of being able to lock users out of their accounts for a fixed duration imposed by Azure AD, and that capability is still in preview. For example, Azure AD Smart Lockout, which is still in preview, is configured by default to allow 10 password attempts before subjecting the account to a 60-second lockout, giving attackers a theoretical limit of 14,400 attempts per account per day. You could decrease the threshold to 5 and increase the duration to 5 minutes to protect against breaches, reducing attempts to 1,440 per day, but this would create the potential for downtime for users whenever their accounts are hit with brute-force and password spray attacks.

Source: https://cssi.us/office-365-brute-force-powershell/

However, Tyler Rusk at CSSI also called out that Microsoft doesn't seem to throttle or limit authentication attempts made through RPS. As shown, Tyler was able to surpass the theoretical 14,400-per-day limit implied by Azure AD Smart Lockout without added logic, moving at a rate that works out to 48,000 attempts per day had he let it run for a full 24-hour period, or an estimated 17,520,000 attempts over 365 days. There are obvious ways to optimize these efforts even further via background jobs (the Start-Job cmdlet), essentially running attacks asynchronously instead of synchronously while tuning for custom lockout limits, maximum attempts, and minimal detection. The possibilities are endless with regard to password spray attacks for obvious reasons. To be fair to Tyler and CSSI, in my opinion they didn't need to leverage such measures to validate their concern.

If the lockout feature were to work, though, and you were able to reduce the threat surface in the manner above, you would then have to contend with the hard countdown of the lockout duration. It is immutable, which means that users have to wait for it to expire in order for the account to become accessible again; the unlock cannot be expedited administratively at present. As such, it can just as easily result in an intentional or unintentional DOS for end users, while also running the possibility of exposing the attack; that is, when and if it starts actually working.
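The lockout arithmetic quoted above is easy to verify. A small C# sketch using the same thresholds (these are theoretical ceilings that ignore the time the attempts themselves take, not measured rates):

```csharp
using System;

class LockoutMath
{
    // Theoretical attempts per day given a lockout threshold and duration,
    // assuming the attacker resumes immediately after each lockout expires.
    static double AttemptsPerDay(int threshold, TimeSpan lockout) =>
        threshold * (TimeSpan.FromDays(1).TotalSeconds / lockout.TotalSeconds);

    static void Main()
    {
        // Default settings quoted in the article: 10 attempts, 60-second lockout.
        Console.WriteLine(AttemptsPerDay(10, TimeSpan.FromSeconds(60))); // 14400
        // Hardened settings: 5 attempts, 5-minute lockout.
        Console.WriteLine(AttemptsPerDay(5, TimeSpan.FromMinutes(5)));   // 1440
    }
}
```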
Obviously, protecting from breach takes precedence over downtime, but becoming prone to DOS attacks is hardly a consolation prize.

"Account Lockouts: for when you want malware and lazy users to cause denial of service attacks because you're too cheap to use proper intrusion protection software." — @NerdPyle

Neither banned passwords nor MFA can protect against DOS or brute-force attacks either, only against the breach itself. In fact, when brute-forcing an account protected by MFA, the MFA challenge itself can be treated as confirmation of a valid cracked username and/or password. In turn, attackers can then begin to try those credentials in other places which may not be protected by MFA, as users and admins alike tend to keep passwords as similar as possible across multiple directories so that they're easy to remember. I'll defer to Ned Pyle of Microsoft as to whether this applies to his employer and their partners.

Summarizing matters thus far, you can brute-force accounts housed in Azure AD via RPS. Obvious solutions for this, such as MFA, customized password blocking, and advanced threat intelligence, are either ineffective, insufficient, paywalled, and/or generate significantly more overhead in order to offset these vulnerabilities. Further, these solutions are often ignored by lazy admins, consultants, and managed service providers, and many may be oblivious to this threat entirely, possibly even to breaches of their own. Deloitte has proven that this can hit even the best of them.

As offensive as all of this may seem, it's important to remember that AD was never designed to be public-facing, quite the opposite. It has actually always been inherently vulnerable to brute-force, password spray, and DOS attacks by design. AD has always been meant to be implemented in conjunction with various other counter-measures in order to maintain its integrity. This includes, but certainly is not limited to, relying on physical security measures such as controlled entry and limiting the ability to access the domain to those that make it past those physical security measures successfully, with the obvious exception of VPN users. This is nothing new.

That said, AD was never, ever meant to be the sole source of security for IT infrastructure and is fundamentally dependent on other security measures in order to be effective.
Consequently, AD becomes markedly more vulnerable when other pre-emptive methods fail or are non-existent. Put simply, such breaches should be the expectation when depending on Azure AD alone for IT security, and this sadly applies to any Office 365 tenant with its default security settings. However, understanding its limitations helps illuminate ways to harden Azure AD and mitigate these problems just the same.

It almost goes without saying, but none of the measures necessary to patch these vulnerabilities are free to the companies leveraging these services at present. Even if Microsoft were to fix this, who is to say that something else just as simplistic and embarrassing isn't hiding around the corner or already being used? That said, avoiding products backed by a 20-year-old security system streamlined for vendor lock-in seems like a viable way of avoiding this problem in the first place.

Source: https://www.microsoft.com/en-us/microsoft-365/blog/2017/11/13/how-organizations-are-connecting-their-on-premises-identities-to-azure-ad/

Before anything else, I truly think that the onus is on Microsoft to ensure that its baseline configuration for cloud accounts doesn't expose its tenants unnecessarily. Sure, we could blame ignorant users and lazy admins, but I don't think that this is fair given the scope of this vulnerability, which covers essentially 46% of Azure AD's user base (password hash sync + cloud only = 46%). It is unknown how many of those have MFA enabled, and the scope of this is ultimately unknown with regard to who is vulnerable, who is actively being attacked, and who has already been breached. But as a former tier 3 support engineer for Exchange Online at Microsoft, I can confirm that a significant number of individuals as well as small and medium businesses rely on Azure AD exclusively, without further counter-measures, and that they account for a sizable portion of Office 365's user base.

Microsoft has clearly acknowledged this problem, but rather than hardening its tenants against such attacks as other cloud services have, it has offered solutions only available to its high-tier plans so as to capitalize on the problem rather than fixing it. As expensive as its products are to migrate away from now, or as "sticky" as Microsoft likes to call it, they are just going to become more costly to manage, more vulnerable, and more difficult to migrate away from over time. This is the malady of any legacy solution.

One easy way for Microsoft to mitigate such attacks is to update the RPS module to support DAP and develop other creative avenues for admins and the like to efficiently and securely manage their clients' tenants. It should also extend the threat intelligence and advanced customizations available only to costly, high-tier license subscribers to all license levels, at least until proper solutions are implemented for all tenant levels.

As an immediate mitigation step, though, Microsoft could simply swap the order of authentication. Rather than requiring a password prior to doing a two-step verification on your phone, it could require the phone verification, through an authenticator app or a third-party MFA app such as Duo, as the initial means of authentication. By deferring the Azure AD password to the second step instead of the first, Microsoft could buffer its weak password security at present and buy time to implement a proper solution.
However, this only applies to users and tenants with MFA enabled and in use.

Just as Active Directory seems to create a necessity for other costly ancillary solutions, Microsoft seems to have built Azure AD to generate a further necessity for more costly solutions coincidentally offered by Microsoft just the same. On top of this, if it had its way, its solution of enabling MFA would also require employers to buy phones and mobile plans for two-step verification for all of their employees, which can cost more on an annual basis than any of its plans.

The same can be said of the costs associated with a proper MFA solution and/or an on-premises or hosted ADFS solution (if none exists), as they drastically complicate the solution as a whole while consequently inflating the ownership costs associated with it. As complexity increases, stability falters while costs skyrocket. All of which is why I recommend avoiding these solutions entirely.

Source: https://blogs.partner.microsoft.com/mpn/create-stickiness-with-ip/

But if a company is entrenched with Microsoft products and migration is out of reach, there are options. One solution that companies can implement is ADFS, which defers authentication attempts to your own on-premises domain controllers rather than Azure AD, while immediately granting more granular control of password policies with on-premises Active Directory and as much protection as money can buy on the network layer. All of which can be quite costly from a licensing perspective alone, let alone the hardware, network infrastructure, and labor required to implement it all, not to mention the staff to maintain it. It also creates a single point of failure unless implemented in a highly available manner.

Companies can implement an MFA solution as well, but there still remains added exposure and vulnerabilities which may require further consideration. As mentioned before, there are also added costs, and MFA may not protect accounts entirely. Users tend to manually synchronize their passwords across multiple platforms for the sake of remembering them, but not all of those platforms have the same protections, MFA or otherwise. Similar to ADFS, access to your mailbox and other apps is restricted when MFA services are degraded, making MFA a single point of failure as well, as shown today by Azure's MFA outage. So if you go with an MFA solution, diversify with a third-party MFA provider.

While the existence of dirsync can do little to protect against brute-force attacks, a strong password policy, including a customized banned password list, enforced on-premises can be mirrored in the cloud. Customers with dirsync already pay for this functionality with Active Directory on-premises and can simply have it mirrored in the accounts synced to the Azure AD forest. Although this cannot protect from brute-force, password spray, or denial of service attacks, it can absolutely harden accounts against prolonged breaches.

I suppose you could also call support to complain about it and see if they'll fix it, but you will likely be met by someone difficult to understand and without experience in such matters. Or maybe you could get a technical account manager to yell into the void, or possibly even find someone who gives half a damn on your behalf if you have deep enough pockets for a Premier membership.
While you're at it, maybe you could upgrade your E3 plan to an E5 plan, at almost double the monthly cost of E3, just to pay Microsoft to compensate for its own vulnerabilities.

In summary, Microsoft services built on Azure AD, along with the businesses leveraging them, are vulnerable to brute-force and password spray attacks which can be carried out by anyone with the capacity to run a script in RPS. There also isn't an adequate means of hardening these services without incurring a significant financial burden and paying for more of Microsoft's services. All of this has probably been the case for as long as the ability to access tenants via RPS has been widely available to admins, and it is ultimately why you would be wise to assume breach with Microsoft cloud solutions just as Microsoft does. Entities can absolutely mitigate these vulnerabilities, but Office 365 and Azure would cease to function as true cloud solutions while generating significantly more overhead costs in the process.

All things considered, it seems as if there is no way to harden Azure AD, or services such as Azure and Office 365 that leverage it by itself, without incurring significant costs in addition to the aforementioned introduction of further complexity, points of failure, and on-premises dependencies for your cloud architecture. This is not to say that Azure cannot be made secure, but it comes at a cost and sacrifices cloud resiliency. Although Microsoft advises others to assume breach, it seems to omit this reality from Office 365 and Azure advertisements, and such inconsistencies are indicative of this stance being more of a cop-out than a tenable security strategy.

How Azure AD Could Be Vulnerable to Brute-Force and DOS Attacks was originally published in Hacker Noon on Medium.
Hackernoon

Microsoft Unveils Its Azure Blockchain Development Kit

Microsoft has announced the first iteration of the Azure Blockchain Development Kit. The set of tools is designed specifically for developers looking to use the resources developed by the technology giant to create their own business-focused blockchain systems. The announcement was made by Marc Mercuri, who is the Principal Program Manager of the Blockchain Engineering team at... The post by Alexander Lielacher appeared first on BTCManager, Bitcoin, Blockchain & Cryptocurrency News.
BTC Manager
More news sources

Azure news by Finrazor

Trending

Hot news

Hot world news

Grayscale Adds Stellar as Latest Cryptocurrency Investment Trust

Digital currency investment group Grayscale confirmed it had successfully launched its latest fund, dedicated to Stellar’s Lumens (XLM) token, in a tweet Jan. 17. Grayscale, which now operates nine cryptocurrency funds, timed the move to coincide with a change of image for its products, renaming all its […] The article Grayscale Adds Stellar as Latest Cryptocurrency Investment Trust appeared first on Bitcoin Central.
Bitcoin Central

Researchers from MIT, Stanford Set to Replace Bitcoin with Their Groundbreaking Crypto Project

Until now, everybody has been talking about Bitcoin, the most popular and widely used digital currency. However, Bitcoin is unable to process thousands of transactions a second. Researchers from the Massachusetts Institute of Technology (MIT), UC-Berkeley, Stanford University, Carnegie Mellon University, the University of Southern California, and the University of Washington have decided to fix this weakness and develop a crypto asset better than Bitcoin. The researchers are working together as Distributed Technology Research (DTR), a non-profit organization based in Switzerland and backed by hedge fund Pantera Capital. The first initiative of Distributed Technology Research is Unit-e, a virtual coin that is expected to solve Bitcoin’s scalability issues while holding true to a decentralized model, processing transactions faster than even Visa or Mastercard.

Babak Dastmaltschi, Chairman of the DTR Foundation Council, said: “The blockchain and digital currency markets are at an interesting crossroads, reminiscent of the inflection points reached when industries such as telecom and the internet were coming of age. These are transformative times. We are nearing the point where every person in the world is connected together. Advancements in distributed technologies will enable open networks, avoiding the need for centralized authorities. DTR was formed with the goal of enabling and supporting this revolution, and it is in this vein that we unveil Unit-e.”

According to the press release, Unit-e will be able to process 10,000 transactions per second. That’s worlds away from the current average of between 3.3 and 7 transactions per second for Bitcoin and 10 to 30 transactions for Ethereum. Joey Krug, a member of the DTR Foundation Council and Co-Chief Investment Officer at Pantera Capital, believes that a lack of scalability is holding back cryptocurrency mass adoption. He said: “We are on the cusp of something where if this doesn’t scale relatively soon, it may be relegated to ideas that were nice but didn’t work in practice: more like 3D printing than the internet.”

The project’s ideology is firmly rooted in transparency, with a belief in open-source, decentralized software developed in the public interest with inclusive decision-making. The core team of the project is based in Berlin. To solve the scalability problem, DTR has decided to develop Unit-e with parameters very close to Bitcoin’s design, but many things will be improved. Giulia Fanti, DTR lead researcher and Assistant Professor of Electrical and Computer Engineering at Carnegie Mellon University, commented: “In the 10 years since Bitcoin first emerged, blockchains have developed from a novel idea to a field of academic research. Our approach is to first understand fundamental limits on blockchain performance, then to develop solutions that operate as close to these limits as possible, with results that are provable within a rigorous theoretical framework.”

The launch of Unit-e is planned for the second half of 2019.
Coinspeaker

BitPay CEO Says Bitcoin Is Solving Real Problems Around the World

BitPay co-founder and CEO, Stephen Pair, has recently commented that Bitcoin (BTC) is solving several issues around the world. He said that in a press release uploaded a […] The post BitPay CEO Says Bitcoin Is Solving Real Problems Around the World appeared first on UseTheBitcoin.
Use The Bitcoin

Trillion Dollar Market Cap, Ethereum Chain Splits & Stellar Lumens Fund - Crypto News

In this video, Mattie gives you the latest bitcoin and crypto news. He talks about the ethereum chain splitting, BitGo CEO Says Institutional Money in Crypto Can ‘Easily’ Reach Trillions of Dollars, and a new Stellar Lumens fund. This is a daily segment!
Altcoin Buzz