
Lower Cost with Higher Stability: How Do We Manage Test Environments at Alibaba?

A service-level reusable virtualization technology: the Feature Environment

By Lin Fan, Senior Engineer from Alibaba’s R&D Efficiency Team.

Preface

Many practices at Alibaba look simple on the surface but actually rest on carefully considered concepts. The management of test environments is one of them.

Internet products and services are usually composed of web applications, middleware, databases, and many back-office business programs. A runtime environment is a small, self-contained ecosystem. The most basic runtime environment is the online (production) environment, which runs the officially released version of the product and provides users with continuous, reliable service.

In addition, many runtime environments that are not open to external users are used for the product team's routine development and verification. These are collectively referred to as test environments. The stability of the production environment, software quality aside, depends mainly on the underlying infrastructure, such as hosts and the network, whereas the stability of a test environment is affected far more by human factors.
Test environment failures are common because versions change frequently and unverified code is deployed there.

Good code submission habits and appropriate pre-change checks can help reduce the occurrence of faults, but cannot eliminate them entirely. Adding multiple replicas of the test environment can effectively contain the impact of a fault. However, enterprise resources are limited, so reducing the cost of test environments and improving their stability become two objectives to balance.

In this area, Alibaba's R&D efficiency team has designed a service-level reusable virtualization technology called the "Feature Environment". This article focuses on test environment management and discusses this way of working with Alibaba characteristics.

Difficulties in Test Environment Management

Test environments are widely used. Common types, such as the system integration test environment, the user acceptance test environment, the pre-release test environment, and the phased (canary) test environment, reflect the delivery lifecycle of the product and, indirectly, the organizational structure of the team.

The test environment of a small, workshop-style product team is easy to manage: each engineer can start the full suite of software components locally for debugging. If that still feels unsafe, adding one shared integration test environment is usually enough.

As the product scales up, starting all service components locally becomes time-consuming and cumbersome.
Engineers can then only run the components they are debugging locally, relying on the components in the public test environment to form a complete system.

Moreover, as the team grows, responsibilities are further subdivided and new sub-teams form, which raises the project's communication cost and makes the stability of the public test environment hard to control. In this process, the complexity of test environment management shows up not only as cumbersome joint debugging across services, but also directly in changes to the delivery process and in resource costs.

A significant change in the delivery process is the growing variety of test environments. Engineers design dedicated test environments for different purposes, and the combination of these environments forms a delivery process unique to each enterprise. The following figure shows a complex delivery process for a large project.

From the perspective of an individual service, environments are connected by pipelines, coupled with various levels of automated testing or manual approval, to move changes between environments. Generally, the higher the level of an environment, the lower its deployment frequency and therefore the higher its relative stability. Conversely, in a low-level environment, a new deployment may occur at any time, interrupting whoever is using the environment at that moment.
Sometimes, to reproduce particular problem scenarios, developers have to log on to servers directly and operate on them, further hurting the stability and availability of the environment.

Faced with test environments that may collapse at any time, small enterprises tend toward "blocking", that is, restricting when services may change and setting strict change rules, while large enterprises prefer "channeling", that is, adding replicas of the test environment to isolate the blast radius of faults. Clearly, if blocking is the chosen method, an overwhelmed test environment will only get worse and worse, a lesson long taught by the ancient Chinese legend of Yu the Great taming the flood: blocking fails, channeling succeeds. Deliberate control alone cannot save a fragile test environment.

In recent years, the rise of DevOps culture has freed developers' hands end to end, but it is a double-edged sword for test environment management. On the one hand, DevOps encourages developers to participate in operations and understand the complete product lifecycle, which helps reduce unnecessary operational incidents. On the other hand, DevOps lets more people touch the test environment, so more changes and more hotfixes appear. From a global perspective, these practices bring more advantages than disadvantages, but they do not improve test environment stability by themselves. Simple process "channeling" cannot save a fragile test environment either.

So what must be invested has to be invested. The low-level test environments used by different teams are made independent, so that each team sees a linear pipeline, while the whole takes the shape of converging rivers.

Ideally, therefore, each developer would obtain an exclusive, stable test environment and complete their work without interference.
In reality, however, cost means that only limited test resources can be shared within a team, and interference between members in the shared test environment becomes a hidden risk to software development quality. Increasing the number of test environment replicas is essentially trading cost for efficiency, yet many explorers searching for the optimal balance between cost and efficiency seem to travel further and further down the same dead-end road.

Given their sheer scale and volume, Alibaba's product teams are just as susceptible to these test environment management troubles.

First Challenge: Management of Test Environment Types

Alibaba also has many types of test environments. The names of these environments are closely tied to their functions, and although some names are common in the industry, no authoritative naming standard has formed. In fact, the name of an environment is only a form; what matters is that each test environment is adapted to a specific application scenario, and that there are some differences between scenarios.

Some differences lie in which services run. For example, a performance test environment may only need to run the most heavily visited key services related to stress testing; running anything else is a waste of resources. Some differences lie in the source of the data being accessed. For example, the data source of a developer test environment is certainly different from that of the production environment, so fake data used in testing does not contaminate real users' requests. The pre-release environment (or user acceptance test environment) uses the same data source as the production environment (or a replica of it), so as to show how new functions behave on real data.
Environments related to automated testing have a separate set of test databases to avoid interference from manual operations during test runs.

Other differences lie in the users. For example, both the phased environment and the pre-release environment use production data sources, but the phased environment serves a small number of real external users, while the pre-release environment serves only internal personnel. In short, there is no need to create a test environment for a scenario without business specificity.

At the Group level, Alibaba places relatively loose restrictions on the form of the pipeline. Objectively, only front-line development teams know what the best delivery process for their team is. The Alibaba development platform merely standardizes some recommended pipeline templates, on which developers can build. Several typical templates are shown below. A few environment type names that are less common outside Alibaba appear here and are described in detail later.

Second Challenge: Management of Test Environment Costs

Cost management is a tricky problem worth exploring. Test environment costs consist mainly of the "labor cost" of managing the environment and the "asset cost" of purchasing infrastructure. Automated, self-service tools can effectively reduce labor costs. Automation is a big topic in its own right and deserves its own article, so we will not delve into it here.

Reducing asset costs depends on technological improvement (setting aside price changes from large-scale procurement), and the history of infrastructure technology spans two major areas: hardware and software. Significant cost reductions on the hardware side usually come from new materials, new production processes, and new hardware designs.
On the software side, however, the most substantial reductions in infrastructure cost today come from breakthroughs in virtualization, that is, resource isolation and multiplexing.

The earliest virtualization technology was the virtual machine. As early as the 1960s, IBM began using this hardware-level virtualization to multiply resource utilization. Each isolated environment on a virtual machine runs a complete operating system, so isolation is strong and generality is high, but it is somewhat heavyweight for simply running business services. After 2000, open-source projects such as KVM and Xen popularized hardware-level virtualization.

Meanwhile, a more lightweight virtualization technology emerged. Early container technology, represented by OpenVZ and LXC, virtualized the runtime environment on top of the operating system kernel, eliminating the resource consumption of separate operating systems and achieving higher resource utilization at the expense of some isolation.

Later, Docker, with its concepts of image encapsulation and single-process containers, elevated this kernel-level virtualization to a technology sought after by millions. Following this pace of advancement, Alibaba began using virtual machines and containers very early; during the "Singles' Day" shopping festival in 2017, 100% of its online business services ran in containers. The next challenge, however, is whether infrastructure resources can be used even more efficiently.

Containers already shed the overhead of hardware instruction translation and per-VM operating systems: only a thin layer of kernel namespace isolation separates a program running in a container from an ordinary process, with virtually no runtime performance loss.
As a result, virtualization seems to have reached its limit in this direction. The only remaining possibility is to set aside generic scenarios, focus on the specific scenario of test environment management, and keep looking for a breakthrough there. Alibaba eventually found a new treasure in this area: service-level virtualization.

Service-level virtualization is essentially the reuse of some services in a cluster, achieved by controlling message routing. With it, many seemingly large standalone test environments actually consume minimal additional infrastructure resources, so providing each developer with a dedicated test environment cluster is no longer a significant expense.

Specifically, the Alibaba delivery process includes two special types of test environment, the "shared basic environment" and the "feature environment", which together form a way of using test environments with Alibaba characteristics. The shared basic environment is a complete service runtime environment that typically runs relatively stable service versions. Some teams use a low-level environment that always deploys the latest version of every service (called the "daily environment") as their shared basic environment.

The feature environment is the most interesting part of this method. It is a virtual environment: superficially, each feature environment is an independent, complete test environment consisting of a cluster of services; in fact, apart from the services the current users actually want to test, all other services are virtualized through the routing system and message-oriented middleware, pointing to the corresponding services in the shared basic environment. In the typical development process at Alibaba, a development task goes through feature branches, release branches, and many related stages before it is finally released.
Most environments are deployed from release branches, but this kind of self-service virtual environment for developers is deployed from the code of a feature branch, hence the name "feature environment" (inside Alibaba it is called the "project environment").

For example, the complete deployment of a transaction system consists of more than a dozen small systems, including the authentication service, the transaction service, the order service, and the settlement service, plus the corresponding databases, cache pools, and message-oriented middleware. Its shared basic environment is then a complete environment with all services and peripheral components. Suppose two feature environments are running. One starts only the transaction service; the other starts the transaction service, the order service, and the settlement service. For users of the first feature environment, although every service except the transaction service is actually proxied by the shared basic environment, it feels like a complete private environment: they can freely deploy, update, and debug the transaction service without worrying about affecting other users. For users of the second feature environment, the three services deployed in the environment can be jointly debugged and verified; if a scenario touches the authentication service, the authentication service in the shared basic environment responds.

Doesn't this sound like simply remapping the routing address behind a domain name, or the delivery address behind a message topic? It is not that simple, because the routes of the shared basic environment cannot be modified on behalf of any single feature environment.
Therefore, an orthodox routing mechanism can only achieve one-way target control: a service inside the feature environment can initiate a call and be routed correctly, but if the initiator of a request is in the shared basic environment, there is no way to know which feature environment the request should be sent to. For HTTP requests, callbacks are even harder to handle: when a service in the shared basic environment is called back, domain name resolution targets the service with the same name in the shared basic environment.

How can data be routed and delivered correctly in both directions? Go back to the essence of the problem: which feature environment a request should enter depends on the initiator of the request. The key to two-way binding is therefore to identify the feature environment the initiator belongs to and to control routing end to end. This is somewhat similar to a "phased release" (canary release) and can be solved with a similar approach.

Thanks to Alibaba's accumulated middleware technology and the widespread use of tracing tools such as EagleEye, it is easy to identify request initiators and trace callback links, which makes routing control simple. To use a feature environment, a user first "joins" it. This operation associates the user's identification (such as an IP address or user ID) with the specified feature environment; each user can belong to at most one feature environment at a time. When a data request passes through routing middleware (such as a message queue, message gateway, or HTTP gateway) and the middleware identifies that the initiator is currently in a feature environment, the request is routed to the corresponding service in that environment.
If that environment does not deploy the target service, the request is routed or delivered to the shared basic environment instead.

The feature environment does not exist in isolation; it can be built on top of container technology for greater flexibility. Just as building containers on virtual machines makes infrastructure easier to obtain, rapidly and dynamically deploying services through containers in a feature environment means a user can add a service to be modified or debugged at any time, or destroy a service at any time and let the shared basic environment automatically take its place.

Another problem is debugging services in a cluster. Combined with the feature branches of AoneFlow, deploying the branches of several services into the same feature environment enables real-time joint debugging of multiple features, so the feature environment can be used for integration testing. However, even though a feature environment is cheap to create, its services are still deployed on the test cluster, which means every code modification must wait for the pipeline to build and deploy: the space overhead is saved, but the time overhead is not.

To further reduce cost and improve efficiency, the Alibaba team came up with another idea: add the local development machine to the feature environment. Within the Group, development machines and test environments both use intranet IP addresses, so with a few modifications it is not difficult to route requests from a specific test environment directly to a development machine. This means that even when a user in a feature environment accesses a service that actually comes from the shared basic environment, some services later in the processing chain can still come from the feature environment, or even from a local machine.
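The routing rule described above (join a user to at most one feature environment, route each request to the service in that environment, and fall back to the shared basic environment when the service is absent) can be sketched in a few lines of Python. This is an illustrative model only, not Alibaba's actual middleware; all names are hypothetical.

```python
# Illustrative sketch of feature-environment routing (hypothetical names).
# In the real system, the originating environment is carried along the call
# chain by trace context (EagleEye-style), so every downstream hop can apply
# the same rule; here we model only the single-hop decision.

SHARED_BASE = "pub-base-env"

# feature environment -> set of services actually deployed in it
environments = {
    "feature-env-1": {"trade-service"},
    "feature-env-2": {"trade-service", "order-service", "settlement-service"},
}

# user identification (IP address or user ID) -> joined feature environment
memberships = {}

def join(user, env):
    """Associate a user with a feature environment (one at a time)."""
    memberships[user] = env

def route(user, target_service):
    """Decide which environment should serve this request."""
    env = memberships.get(user)
    if env and target_service in environments[env]:
        return env          # service is deployed in the user's feature env
    return SHARED_BASE      # otherwise fall back to the shared base

join("alice", "feature-env-1")
join("bob", "feature-env-2")

print(route("alice", "trade-service"))       # feature-env-1
print(route("alice", "order-service"))       # pub-base-env (not in env 1)
print(route("bob", "settlement-service"))    # feature-env-2
print(route("carol", "trade-service"))       # pub-base-env (joined no env)
```

Note how destroying a service in a feature environment is just removing it from the environment's set: the fallback rule then automatically substitutes the shared basic environment, matching the behavior described above.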
In this way, debugging services in the cluster becomes simple, without long waits for pipeline builds, as if the entire test environment were running locally.

Building a Feature Environment by Yourself

Do you think service-level virtualization is too niche and out of reach for ordinary developers? Not at all. You can build a feature environment yourself right now and give it a try.

Alibaba's feature environment implements two-way-routing service-level virtualization covering common service communication methods such as HTTP calls, RPC calls, message queues, and message notifications. Building such a fully functional test environment is challenging, so from a general-purpose point of view we can start with the most popular protocol, HTTP, and build a simple feature environment that supports one-way routing.

To facilitate environment management, it is best to have a cluster that can run containers; in the open-source world, the full-featured Kubernetes is a good choice. Several Kubernetes concepts related to routing control are exposed to users as resource objects.

Briefly: the Namespace object isolates the routing domain of services (this is not the kernel namespace used for container isolation; don't confuse the two). The Service object specifies the routing target and name of a service. The Deployment object corresponds to the actually deployed service. A Service of type ClusterIP (ignore the NodePort and LoadBalancer types for now) can route to a real service in the same Namespace, while a Service of type ExternalName can act, within the current Namespace, as a routing proxy for an external service. These resource objects are managed with files in YAML format. With this much understood, you can start building the feature environment. We will skip the setup of the infrastructure and the Kubernetes cluster.
Let's get straight to the point. First, prepare a public infrastructure environment to serve as the routing fallback. This is a full-scale test environment that includes every service and other infrastructure of the system under test. External access is not considered for now, so the Service objects of all services in this environment can use the ClusterIP type. Assume these objects live in a Namespace named pub-base-env. Kubernetes then automatically gives each service in this environment the cluster-global domain name "<service-name>.pub-base-env.svc.cluster.local" as well as the short domain name "<service-name>" available inside the Namespace.

With this fallback in place, you can start building a feature environment. The simplest feature environment contains only one real service (say, trade-service), while all other services are proxied to the public infrastructure environment through ExternalName-type Service objects. Assume this environment uses a Namespace named feature-env-1; its YAML description is as follows (non-key fields omitted):

```yaml
kind: Namespace
metadata:
  name: feature-env-1
---
kind: Service
metadata:
  name: trade-service
  namespace: feature-env-1
spec:
  type: ClusterIP
  ...
---
kind: Deployment
metadata:
  name: trade-service
  namespace: feature-env-1
spec:
  ...
---
kind: Service
metadata:
  name: order-service
  namespace: feature-env-1
spec:
  type: ExternalName
  externalName: order-service.pub-base-env.svc.cluster.local
  ...
```

Note that order-service can be accessed through its short local domain name order-service inside the feature environment's Namespace, and the request is routed to the global domain name configured in the Service, that is, to the service of the same name in the public infrastructure environment. Other services in the Namespace do not perceive this difference.
Instead, they can simply assume that all related services are deployed in this Namespace.

If the developer modifies order-service while working in the feature environment, the modified version should be added to the environment. All you need to do is modify the order-service Service object with a Kubernetes patch operation, changing its type to ClusterIP, and create a Deployment object in the current Namespace for the Service to select.

The modified Service object is only effective for services inside the corresponding Namespace (that is, the corresponding feature environment); it cannot affect callbacks initiated from the public infrastructure environment, so this route is one-way. In this case, the feature environment must contain the entry service of the call chain under test as well as any service that performs callbacks. For example, if the feature under test is triggered from the user interface, the service providing that interface is the entry service, and even if it has not been modified, its mainline version should be deployed into the feature environment.

With this mechanism, it is also easy to replace a cluster service with a local one for debugging and development. If the cluster and the local host are on the same intranet, just point the ExternalName-type Service object at the local IP address and service port; otherwise, add public network routing for the local service and achieve the same effect through dynamic domain name resolution.

Meanwhile, Yunxiao (Alibaba's DevOps platform) is gradually refining its Kubernetes-based feature environment solution, which will provide more comprehensive routing isolation support. It is worth mentioning that, because of the particularities of the public cloud, joining a local host to a cluster in the cloud for joint debugging is a challenge that must be overcome.
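The patch operation described above, switching order-service from an ExternalName proxy to a real ClusterIP service inside the feature environment, can be sketched as a JSON merge patch. The snippet below builds the patch body in Python; the field names follow the Kubernetes Service spec, but treat this as an illustrative sketch (the selector label is an assumption), not a tested manifest.

```python
import json

# JSON merge patch that turns the order-service proxy into a real service.
# Setting a field to None in a merge patch deletes it, so externalName is
# removed; the selector (hypothetical label "app: order-service") must match
# the pods of the newly created Deployment.
# It might be applied with something like:
#   kubectl patch service order-service -n feature-env-1 --type merge -p "$PATCH"
patch = {
    "spec": {
        "type": "ClusterIP",
        "externalName": None,
        "selector": {"app": "order-service"},
    }
}

PATCH = json.dumps(patch)
print(PATCH)
```

Reversing the patch (back to type ExternalName, with the selector removed) returns the environment to proxy mode, which is how a service can be "destroyed" in the feature environment and silently replaced by the shared base.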
To this end, Yunxiao has implemented a way to join a local LAN host (without a public IP address) to a Kubernetes cluster on a different intranet for joint debugging, using a tunnel network plus the routing capability of kube-proxy. The technical details will be published soon on the official Yunxiao WeChat account, so stay tuned.

Summary

While many people are still waiting for the next wave of virtualization technology after virtual machines and containers, Alibaba has already offered an answer. An entrepreneurial mentality makes people at Alibaba understand that every penny must be saved, and in fact it is not technology but imagination that limits innovation. The concept of service-level virtualization breaks through the traditional notion of environment replicas and resolves the balance between cost and stability of test environments from a unique angle.

As a particular technical vehicle, the value of the feature environment lies not only in a lightweight test environment management experience, but also in a smooth way of working for every developer: "minimal but not simple".

Experience is the best teacher. Alibaba Yunxiao has contributed enormously to the methodology for collaborating on large products. Industrial tasks and technical challenges, such as agile and rapid product iteration, huge amounts of hosted data, highly effective testing tools, distributed second-level architecture, and large-scale cluster deployment and release, are contributions from internal Alibaba teams, ecosystem partners, and cloud developers. We sincerely welcome colleagues from across the industry to discuss and exchange ideas with us.

(Original article by Lin Fan 林帆)

Alibaba Tech

First-hand and in-depth information about Alibaba's latest technology. Facebook: "Alibaba Tech". Twitter: "AlibabaTech".

Lower Cost with Higher Stability: How Do We Manage Test Environments at Alibaba?
was originally published in Hacker Noon on Medium, where people are continuing the conversation by highlighting and responding to this story.

How to Use Flutter for Hybrid Development: Alibaba’s Open Source Code Instance

How Alibaba's Xianyu enables hybrid integration of Flutter into existing Native apps through incremental migration with FlutterBoost.

This article is part of Alibaba's Utilizing Flutter series.

Apps of a certain scale usually have a set of mature, universal fundamental libraries; apps in the Alibaba ecosystem in particular generally depend on many of the system's fundamental libraries. The cost and risk of re-developing such an app from scratch in Flutter are high, so incremental migration inside the Native app is the robust way to bring Flutter into an existing Native app.

The tech team at Xianyu (闲鱼), Alibaba's second-hand trading platform, has developed a unique hybrid technology solution in this practice.

Status Quo and Thoughts

The hybrid solution currently used by Xianyu shares a single Flutter engine. It is based on the premise that at most one page is visible at any time. (Several ViewControllers can be visible in a few specific scenarios, but those are not discussed here.)

You can understand the solution this way: treat the shared Flutter View as a canvas and a Native container as the logical page. Every time a container is opened, the communication mechanism notifies the Flutter View to render the current logical page, and the Flutter View is then placed into the current container.

This solution cannot support multiple parallel logical pages at the same time, because page switching must operate from the top of the stack, and horizontal switching cannot preserve state. For example, take two pages A and B, with B currently at the top of the stack. To switch to A, B must be popped off the stack, and at that moment B's state is lost. To switch back to B, you can only re-open it, and its page state cannot be restored.

Moreover, during the pop process, an official Flutter Dialog may be mistakenly dismissed.
In addition, the stack operations rely on modifying an attribute of the Flutter framework, which makes the solution invasive.

FlutterBoost: A New-Generation Hybrid Technology Solution

The FlutterBoost project is already open source on GitHub.

Plan

As Xianyu promoted Flutter, increasingly complex page scenarios gradually exposed the limitations and problems of the old solution, so we launched a new hybrid technology solution codenamed FlutterBoost (a nod to the C++ Boost library). The main objectives of the new solution are:

- A reusable, universal hybrid solution
- Support for more complex hybrid modes, such as a homepage Tab
- Non-invasiveness: no longer relying on modifications to Flutter
- Support for a universal page lifecycle
- A unified, clear design concept

Like the old solution, the new one still adopts the shared-engine mode. The main idea is that a Native container drives a Flutter page container through messages, keeping the Native container and the Flutter container synchronized; the content rendered by Flutter is driven by the Native container.

Simply put, we want to turn the Flutter container into a browser: enter a page address, and the container takes care of rendering the page.
On the Native side, we only need to consider how to initialize the container and then set the container's corresponding page flag.

Main Concepts

Native layer:
- Container: the Native container, i.e. the platform controller (Activity or ViewController)
- Container Manager: the manager of the containers
- Adaptor: the Flutter adaptation layer
- Messaging: channel-based message communication

Dart layer:
- Container: the container used by Flutter to hold widgets, implemented as a subclass of Navigator
- Container Manager: manages the Flutter containers and provides APIs such as Show and Remove
- Coordinator: receives Messaging messages and calls the Container Manager's state management
- Messaging: channel-based message communication

Understanding of Pages

Native and Flutter express the object and concept of a "page" differently. In Native, a page is generally a ViewController or an Activity, while in Flutter a page is a widget. We want to unify the concept of pages, or rather weaken and abstract away the page concept attached to Flutter widgets. In other words, whenever a Native page container exists, FlutterBoost guarantees that a widget fills it as the container's content. The Native container is therefore what counts when we reason about or perform routing operations, and the Flutter widget depends on the state of its Native page container.

So when we talk about pages in FlutterBoost, we mean a Native container and its attached widget. All page routing operations, opening or closing pages, are really direct operations on Native page containers. No matter where a routing request comes from, it is ultimately forwarded to Native to perform the routing.
This is also why the Platform protocol must be implemented when FlutterBoost is integrated.

On the other hand, we cannot control service code that pushes new widgets through Flutter's own Navigator. If a service uses the Navigator directly to operate widgets without going through FlutterBoost, including non-full-screen widgets such as Dialogs, we recommend that the service manage their state itself. Widgets of this kind do not belong to the pages defined by FlutterBoost.

Understanding this page concept is critical to understanding and using FlutterBoost.

Main Differences from the Old Solution

As mentioned earlier, the old solution maintains a single Navigator stack at the Dart layer to switch widgets. The new solution introduces the Container concept on the Dart side. Instead of maintaining existing pages in a stack, it maintains all current pages as a flat key-value mapping in which each page has a unique ID. This structure naturally supports looking up and switching between pages and is no longer constrained to operating on the top of a stack, which resolves some of the earlier problems caused by pop. Moreover, page-stack operations no longer require modifying the Flutter source code, eliminating the intrusiveness of the implementation.

In fact, the Container we introduced is a Navigator; that is, each Native container corresponds to one Navigator. How does this work?

Implementation of Multiple Navigators

Flutter provides an interface for customizing the Navigator at the underlying layer, and we have implemented an object that manages multiple Navigators. At most one Flutter Navigator is visible at any time, and the pages it contains are those of the currently visible container.

Native containers and Flutter containers (Navigators) correspond one to one, and their lifecycles are synchronized.
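As a rough illustration of this one-to-one correspondence and of the flat ID-to-container mapping, here is a minimal Python sketch. The names are invented for the example; the real implementation lives in Dart and platform code.

```python
# Sketch: pages kept in a flat id -> page mapping instead of a stack,
# so any page can be found and switched to, not just the stack top.
# All names are illustrative, not FlutterBoost's real API.
import itertools

class ContainerManager:
    def __init__(self):
        self.containers = {}      # unique_id -> page_name (flat mapping)
        self.visible_id = None
        self._ids = itertools.count(1)

    def on_native_container_created(self, page_name):
        # A Native container was created; create the matching Flutter
        # container under the same unique ID (lifecycles stay in sync).
        uid = "c%d" % next(self._ids)
        self.containers[uid] = page_name
        return uid

    def show(self, uid):
        # Direct lookup by ID: no pop() needed to reach a deeper page.
        if uid not in self.containers:
            raise KeyError(uid)
        self.visible_id = uid
        return self.containers[uid]

    def on_native_container_destroyed(self, uid):
        # The Flutter container dies together with its Native container.
        self.containers.pop(uid, None)
        if self.visible_id == uid:
            self.visible_id = None

mgr = ContainerManager()
a = mgr.on_native_container_created("home")
b = mgr.on_native_container_created("detail")
mgr.show(b)
print(mgr.show(a))           # switch straight back to "home" by ID
mgr.on_native_container_destroyed(b)
print(b in mgr.containers)   # False
```

Because the mapping is flat, switching back to any existing page is a dictionary lookup rather than a sequence of pops, which is exactly why the pop-related problems of the old stack disappear.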
When a Native container is created, a Flutter container is created with it, and the two are linked by the same ID. When the Native container is destroyed, the Flutter container is destroyed as well. The Flutter container's status follows the Native container's, which is what we mean by Native-driven. The Manager centrally manages and switches which container is currently displayed on the screen.

Let's use a simple example to describe the process of creating a new page:

1. Create a Native container (an iOS ViewController, or an Android Activity or Fragment).
2. The Native container notifies the Flutter Coordinator through the message mechanism that a new container has been created.
3. The Flutter Container Manager is then notified to create the corresponding Flutter container and load the corresponding widget page into it.
4. When the Native container is displayed on screen, it sends a message to the Flutter Coordinator with the ID of the page to be shown.
5. The Flutter Container Manager finds the Flutter container with that ID and sets it as the visible foreground container.

This is the main logic for creating a new page. Operations such as destruction and entering the background are likewise driven by Native container events.

Officially Proposed Hybrid Solution

How It Works

The Flutter technology stack consists mainly of the Flutter Engine, implemented in C++, and the Framework, implemented in Dart (the compilation and build tools are not discussed here). The Flutter Engine is responsible for thread management, Dart VM state management, and Dart code loading. The Dart-implemented Framework is the main API surface that services see; concepts such as widgets belong to the Framework at the Dart level.

At most one Dart VM can be initialized in a process. However, a process can have multiple Flutter Engines, and the Engine instances share that single Dart VM.

Let's take a look at the specific implementation.
Every time a FlutterViewController is initialized on iOS, an engine is initialized, meaning a new thread (in theory, threads can be reused) runs Dart code. The same applies to an Activity on Android. If multiple Engine instances are started, note that the Dart VM is still shared, but the code loaded by different Engine instances runs in independent Isolates.

Official Recommendations

Deep Engine Sharing

Regarding hybrid solutions, we have discussed several possibilities with Google. Flutter's official long-term recommendation is to support multi-window rendering within the same engine; at the very least, FlutterViewControllers should logically share the resources of one engine. In other words, all rendering windows would share the same main Isolate.

However, this official long-term recommendation is not yet well supported.

Multi-Engine Mode

The main problem a hybrid solution must solve is how to handle alternating Flutter and Native pages. Google's engineers offer a keep-it-simple approach: for consecutive Flutter pages (widgets), the current FlutterViewController is enough; each time Flutter pages are interleaved with Native pages, a new engine is initialized.

For example, consider the following navigation sequence:

Flutter Page1 -> Flutter Page2 -> Native Page1 -> Flutter Page3

We only need to create separate Flutter instances for Flutter Page1 and Flutter Page3.

The advantage of this approach is that it is easy to understand and logically sound, but it has potential problems. If Native pages and Flutter pages keep alternating, the number of Flutter Engines grows linearly, and a Flutter Engine is itself a heavyweight object.

Problems of the Multi-Engine Mode

Redundant resources: In multi-engine mode, the Isolates in the different engines are independent of one another.
Logically this does no harm, but under the hood each engine maintains its own image cache and other memory-heavy objects. Imagine every engine keeping its own image cache: memory consumption can become very high.

Plug-in registration: Plug-ins rely on a Messenger to transmit messages, and the Messenger is currently implemented by the FlutterViewController (or Activity). With multiple FlutterViewControllers, plug-in registration and communication become chaotic and hard to maintain, and the sources and targets of messages become uncontrollable.

Page differences between Flutter widgets and Native pages: Flutter pages are widgets, while Native pages are ViewControllers or Activities. Logically we want to eliminate the differences between Flutter pages and Native pages; otherwise, unified operations such as page tracking accrue unnecessary complexity.

Increased complexity of inter-page communication: If all Dart code runs in the same engine instance and shares one Isolate, a unified programming framework can be used for inter-widget communication. Multiple engine instances make this more complex.

Considering all of the above, we did not adopt the multi-engine hybrid solution.

Summary

FlutterBoost now supports all Flutter-based services in the Xianyu client in production, supports its more complex hybrid scenarios, and stably serves hundreds of millions of users.

From the very beginning of the project, we hoped FlutterBoost could solve the general problem of integrating Flutter into a Native app in hybrid mode, so we built it as a reusable Flutter plug-in, hoping to attract more interested people to help build the Flutter community. In this limited space, we have shared the experience and code Xianyu has accumulated on Flutter hybrid technology.
Anyone who is interested is welcome to get in touch with us.

Extension and Supplement

Performance

When switching between two Flutter pages, we have only one Flutter view, so we need to save a screenshot of the previous page. If many screenshots are kept for Flutter pages, they occupy a large amount of memory. Here we adopt a two-level file-and-memory cache policy: at most 2-3 screenshots are kept in memory, while the rest are written to files and loaded on demand. This keeps memory usage at a stable level while preserving the user experience.

In terms of page-rendering performance, the advantages of Flutter AOT are obvious: during fast page switching, Flutter switches the corresponding pages responsively, logically creating the feel of a multi-page Flutter.

Support for Release 1.0

At the start of the project, we developed against the Flutter version Xianyu was then using, and we have since run a compatibility-upgrade test against Release 1.0. No problems have been found so far.

Access

Any project that integrates Flutter can easily introduce FlutterBoost as a plug-in through the official dependency mechanism, and only a small amount of integration code is required. For detailed access documentation, see the official project documentation on the GitHub homepage.

(Original article by Chen Jidong 陈纪栋)

Alibaba Tech

First-hand and in-depth information about Alibaba's latest technology → Facebook: "Alibaba Tech". Twitter: "AlibabaTech".

How to Use Flutter for Hybrid Development: Alibaba's Open Source Code Instance was originally published in Hacker Noon on Medium, where people are continuing the conversation by highlighting and responding to this story.

Amazon & Alibaba Trade Blows in War for China’s E-commerce Throne

Trade war or not, Amazon intends to challenge Alibaba's dominance in China's e-commerce industry. The Jeff Bezos-led company is on a hiring spree in China. Not to be outdone, Alibaba plans to raise up to $20 billion through a listing in Hong Kong. By CCN Markets: The war for China's e-commerce marketplace is heating up, and it may result in a clash between two titans of industry: US powerhouse Amazon and homegrown champion Alibaba. Both Alibaba (NYSE: BABA) and Amazon (NASDAQ: AMZN) have had new developments emerge this week which signal increased competition in China's e-commerce industry, despite the looming trade war. The post Amazon & Alibaba Trade Blows in War for China's E-commerce Throne appeared first on CCN Markets.

Around The Block With Jeff and Dave – June 4, 2019 – Yahoo’s Cryptocurrency Exchange, JP Morgan and Alibaba, and the all new Ledger Nano X Reviewed. Join Us!

In this episode, we review what JP Morgan and Alibaba are working on, how large enterprises are using blockchain, and what this means for ICO/IEO coins. Yahoo! has also gone live with a new cryptocurrency exchange: what does this mean for the broader adoption of coins? Finally, we take an in-depth look at the all-new Ledger Nano X. The post Around The Block With Jeff and Dave appeared first on BTCManager, Bitcoin, Blockchain & Cryptocurrency News.
BTC Manager

Jack Ma’s Alibaba May Pursue Hong Kong Listing Amid US-China Tension

By CCN: As U.S.-China trade war tensions escalate, Alibaba Group is toying with the idea of raising $20 billion via a Hong Kong listing, Bloomberg reports, citing unnamed sources because the matter is private. Chinese tech leader Alibaba, of which Jack Ma is the founder and chairman, debuted on the New York Stock Exchange in September 2014 and raised $25 billion, the largest IPO in history. The deal was bigger than the IPOs of internet giants Facebook, Google, and Twitter combined. The sources cited by Bloomberg claim that Alibaba, an e-commerce behemoth, is opting to take this… The post Jack Ma's Alibaba May Pursue Hong Kong Listing Amid US-China Tension appeared first on CCN.

Alibaba news by Finrazor

