KINTO Technologies Tech Blog
Building Cloud-Native Microservices with Kotlin/Ktor (Observability Edition)

Hello. My name is Narazaki from the Woven Payment Solution development group. At Woven by Toyota, we work on the backend of the payment infrastructure used in Toyota's Woven City, which we develop with Ktor, a web framework written in Kotlin. These backend applications run on City Platform, the Kubernetes-based infrastructure used in Woven City, and form the foundation of our microservices.

In this article, I will introduce some pain points that come with a microservices architecture, along with tips for improving observability, which is essential to resolving those pain points, using as examples Ktor, the web framework we use, and Kubernetes as the platform hosting our microservices. Beyond Kubernetes, I will also draw on a so-called "cloud native" technology stack: this time, Loki as the log collection tool, Prometheus as the metrics collection tool, and Grafana as the visualization tool. I hope this is useful not only for those already developing microservices in Java or Kotlin, but also for developers planning to adopt microservices and Kubernetes, whatever language they use. Instructions for replicating these steps, along with sample code, are provided at the end of this post. If you have time, please give it a try!

First: The Challenges of Microservices

Generally speaking, adopting microservices resolves various problems of monolithic applications, but it also increases the complexity of the system, making it much harder to isolate problems when they occur. Here we will consider three specific pain points.

Pain Point 1: It is not clear when, and in which service, an error occurred.
Pain Point 2: The operational status of dependent services must always be taken into account.
Pain Point 3: It is difficult to isolate resource-related performance degradation.

By improving observability, we can tackle these challenges. In this post, I'll show how to implement a solution for each pain point using Ktor as an example. The approach involves introducing just three Ktor plugins and adding a few lines of code.

Solution 1: Introducing CallId

For this solution, I will create two services that frequently call each other's APIs within the same cluster, as is common in microservices, and look at how the logs are captured in this environment.

```mermaid
sequenceDiagram
    participant User as External user (outside the cluster)
    participant A as Frontend Service
    participant B as Backend Service
    User->>A: /call request
    Note over User,A: Requests from outside the cluster
    A->>B: / request
    Note over A,B: Forward the request from Frontend to Backend
    B-->>A: / response
    Note over B,A: Return the result processed by Backend
    A->>User: /call response
```

Logs are written to standard output and collected by a log collection tool (Loki, in this case) deployed separately on Kubernetes. The services will be referred to as the caller (frontend) and the callee (backend).

When monitoring, you can see what is happening on each server by specifying the pod name and so on in the logging platform, but requests that span servers cannot be viewed in relation to each other. As the number of requests increases, it becomes very difficult to work out which application logs belong together simply by displaying them in chronological order. When a large number of requests come in, it becomes unclear which requests and responses are related.

The mechanism that associates causally related events across servers over the network is called distributed tracing. In general, if you use a service mesh such as Istio, you can visualize related requests with tools like Zipkin and Jaeger, making it intuitive to understand where errors occurred.
On the other hand, such tracing is not very convenient when troubleshooting application logs, for example when searching the logs for keywords. This is where Ktor's CallId plugin comes into play. With it, you can search for and view specific logs by using the CallId as a keyword on the logging platform. And since there is no need to configure the network layer, it is flexible: an application engineer can set it up alone, without introducing a service mesh or similar infrastructure.

Let's actually run the application and check the logs in Grafana. In this example, we prepare the same container image for both the frontend and the backend, so we only need to generate one project. Follow the steps at the end of this post to generate the source code from the template.

```kotlin
dependencies {
    implementation("io.ktor:ktor-server-call-logging-jvm:$ktor_version")
    implementation("io.ktor:ktor-server-call-id-jvm:$ktor_version")
    implementation("io.ktor:ktor-server-core-jvm:$ktor_version")
}
```

The necessary libraries are referenced as shown above. The logging-related part of the generated code should be modified as follows. (Comments explain each line; no further modifications are necessary.)

```kotlin
fun Application.configureMonitoring() {
    install(CallLogging) {
        level = Level.INFO
        filter { call -> call.request.path().startsWith("/") } // Conditions under which logs are output
        callIdMdc("call-id") // Embeds the value in the %X{call-id} part of logback.xml
    }
    install(CallId) {
        header(HttpHeaders.XRequestId) // The header that carries the ID value
        verify { callId: String ->
            callId.isNotEmpty() // Verify that a value exists
        }
+       generate {
+           UUID.randomUUID().toString() // If not, generate and embed a new value
+       }
    }
}
```

In the HTTP client implementation, it is recommended to set this value in the request header so that the same CallId propagates across requests. Add the following to verify that the CallId propagates correctly between servers:

```kotlin
dependencies {
    ...
+   implementation("io.ktor:ktor-client-core:$ktor_version")
+   implementation("io.ktor:ktor-client-cio:$ktor_version")
    ...
}
```

```kotlin
routing {
+   get("/call") {
+       application.log.info("Application is called")
+       val client = HttpClient(CIO) {
+           defaultRequest {
+               header(HttpHeaders.XRequestId, MDC.get("call-id"))
+           }
+       }
+       val response: HttpResponse = client.get("http://backend:8000/")
+       call.respond(HttpStatusCode.OK, response.bodyAsText())
+   }
}
```

Once you're able to build and deploy using the sample code below, try running the following commands to make API calls:

```shell
curl -v localhost:8000/
curl -v -H "X-Request-Id: $(uuidgen)" localhost:8000/call
```

With this setup, the CallId now propagates between servers, allowing it to be used as a search keyword. Even if you don't supply a value in the header, a CallId value will still be added to the log. And if you search for the UUID generated by this command, you can correlate events across multiple servers.

Solution 2: Setting Up Liveness and Readiness Probes

In Kubernetes, liveness and readiness probes are the mechanisms that communicate the application's health status to the control plane. You can refer to this Google article for more information on each.

- Liveness Probe: Reports the container's own health status.
- Readiness Probe: Reports whether the application, including its dependent services, is ready to serve API traffic.

By setting these, you can efficiently recycle containers that have failed, and keep traffic away from containers that are not yet ready to serve requests. Let's implement these with Ktor. No libraries are needed here. The implementation policy: the liveness probe only informs Kubernetes that the process itself is alive, so simply returning OK is fine. The readiness probe should ping dependent services and connected databases; to handle cases where responses do not arrive in time, set a request timeout.

```kotlin
routing {
    ...
    get("/livez") {
        call.respond("OK") // Simply returns a 200 status to indicate the web server is running
    }
    get("/readyz") {
        // Implement pings to the DB or other dependent services based on the application's requirements
        // Set request timeouts on your SQL or HTTP client to ensure connections are made within the expected time
        call.respond("OK")
    }
}
```

You need to tell the Kubernetes control plane that these endpoints exist, so add the following to the Deployment definition. This configuration also lets you allow for the time the application needs before it can process requests, preventing false detections even when the initial startup takes longer.

```yaml
...
livenessProbe:
  httpGet:
    path: /livez
    port: 8080
readinessProbe:
  httpGet:
    path: /readyz
    port: 8080
  initialDelaySeconds: 15 # Start checking 15 seconds after the container starts; default is 0
  periodSeconds: 20       # Runs every 20 seconds
  timeoutSeconds: 5       # Expected to return a result within 5 seconds
  successThreshold: 1     # Considered successful after one success
  failureThreshold: 3     # After three consecutive failures the pod is marked unready (for a liveness probe, the container would be restarted)
...
```

With this, the setup is complete. You can test the behavior by adding a sleep inside an endpoint or by adjusting these parameters. Also, although it is beyond the scope of this article, we recommend building a notification system with Prometheus's Alertmanager or similar so that you are alerted when an abnormality is detected.

Solution 3: Configuring Micrometer

With the first two solutions in place, observability should already be significantly improved. However, while Kubernetes allows monitoring at the Pod and Node levels, runtime-level monitoring inside the application is still limited. Kotlin applications generally run on the JVM, so you can monitor runtime performance by tracking CPU and memory usage, as well as garbage collection behavior, on the JVM. This helps detect unintended runtime-related performance degradation.
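These JVM-level signals can be inspected even without any framework support. As a standalone illustration (not part of the article's Ktor setup), the standard `java.lang.management` API exposes the same heap and garbage-collection figures that a metrics pipeline ultimately exports:

```kotlin
import java.lang.management.ManagementFactory

// Standalone sketch: read the JVM's own heap and GC statistics.
// These are the raw figures behind runtime-level monitoring.
fun main() {
    val heap = ManagementFactory.getMemoryMXBean().heapMemoryUsage
    // heap.max may be -1 when no explicit limit is configured.
    println("heap used: ${heap.used / 1024 / 1024} MiB (max: ${heap.max / 1024 / 1024} MiB)")

    for (gc in ManagementFactory.getGarbageCollectorMXBeans()) {
        // Collection counts and times accumulate since JVM start.
        println("gc ${gc.name}: count=${gc.collectionCount}, timeMs=${gc.collectionTime}")
    }
}
```

Polling these beans by hand does not scale across a fleet of pods, which is exactly the gap the next section addresses.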
So, how should we approach this in a microservices architecture? In a monolith, it is relatively simple: install an agent on the server where the application runs. In Kubernetes, on the other hand, where containers are repeatedly created and destroyed, installing an agent is not very practical.

Ktor provides a plugin for Micrometer, the de facto standard for metrics collection in the Java ecosystem, which can be integrated with Prometheus for monitoring. When you create a project from the template described above, the following packages and source code are added to the project.

```kotlin
implementation("io.ktor:ktor-server-metrics-micrometer-jvm:$ktor_version")
implementation("io.micrometer:micrometer-registry-prometheus:$prometeus_version")
```

```kotlin
val appMicrometerRegistry = PrometheusMeterRegistry(PrometheusConfig.DEFAULT)

install(MicrometerMetrics) {
    registry = appMicrometerRegistry
}

routing {
    get("/metrics-micrometer") {
        call.respond(appMicrometerRegistry.scrape())
    }
}
```

By adding the following annotations to the Kubernetes Service definition, Prometheus will automatically scrape the endpoint and collect the data.

```yaml
kind: Service
metadata:
  name: backend
  namespace: sample
+ annotations:
+   prometheus.io/scrape: 'true'
+   prometheus.io/path: '/metrics-micrometer'
+   prometheus.io/port: '8080'
```

Additionally, by adding a Grafana dashboard from the marketplace, you can easily visualize JVM performance metrics, improving the transparency of your application. You can simply copy and paste the dashboard ID from the marketplace to register it. This setup lets you display memory, CPU, garbage collection, and other metrics on a per-pod basis.

In addition, by observing from these metrics how much CPU and memory the application actually uses, and then setting the container's resource requests and limits accordingly, you can improve the efficiency of resource usage across the Kubernetes cluster. (Setting these resources is also necessary to ensure proper scaling of the application.)

```yaml
resources:
  requests:
    memory: "512Mi"
    cpu: "500m"
  limits:
    memory: "512Mi"
    cpu: "750m"
```

Lastly

I hope you have seen that Ktor is a plugin-based web framework that can improve non-functional requirements without significantly changing the behavior of existing applications. In complex systems, a single oversight can lead to untraceable issues, where hypotheses about bugs can't be verified and debugging turns into a maze. Regardless of the architecture, it is important to continuously reduce blind spots to prepare for potential issues.

I hope this article has served as an introduction to the observability features of web frameworks for microservice applications. If you are considering adopting microservices and are unsure which framework to choose, it is worth checking whether features like these are available when selecting a technology. There are also other best practices for building and smoothly operating microservices, such as GitOps, inter-service authentication and authorization, and load balancing, which I hope to cover in a future post.

Finally, we are hiring for a variety of positions. If you're interested, feel free to start with a casual chat.

(Reference) Environment Setup and Sample Code

To replicate this setup in your own environment, you will need a Java runtime, Docker Desktop with Kubernetes enabled, and Helm. These steps have been tested on Mac/Linux. (Windows users, please use WSL2.) This article assumes Kubernetes is running locally; if yours is in the cloud, adjust accordingly.

In this article, we used Loki for log collection, Prometheus for metrics collection, and Grafana for visualization. The source code is created from scratch using a template, and the Docker image is built with Jib as a Gradle build task. In the following example, we run the build task in Gradle using Kotlin Script (.kts).
We also recommend installing a tool called Skaffold to automate Docker tagging and Kubernetes deployment for your container cluster.

```shell
helm repo add grafana https://grafana.github.io/helm-charts
helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
helm repo update
helm install prometheus prometheus-community/prometheus -n prometheus --create-namespace
helm install loki grafana/loki-stack -n grafana --create-namespace
helm install grafana grafana/grafana -n grafana
export POD_NAME=$(kubectl get pods --namespace grafana -l "app.kubernetes.io/name=grafana,app.kubernetes.io/instance=grafana" -o jsonpath="{.items[0].metadata.name}")
kubectl --namespace grafana port-forward $POD_NAME 3000 # Open another terminal to keep this command running after execution.
kubectl get secret --namespace grafana grafana -o jsonpath="{.data.admin-password}" | base64 --decode # | pbcopy # For Mac users, uncomment this to copy the password to the clipboard.
```

Now, access Grafana in your browser at http://localhost:3000. Use the user ID admin and the password output by the last command to log in. Configure each data source as follows:

- Loki: http://loki:3100
- Prometheus: http://prometheus-server.prometheus.svc.cluster.local

This completes the monitoring setup. For the code, create a new Ktor application from a template in IntelliJ. Select the following from IntelliJ. If you're using VS Code, you can download it from this site. In this example, we prepare the same container image for both the frontend and backend, so we only need to generate one project.

Add the following Jib configuration for building with Docker. Then confirm that you can build by running the Jib Gradle task ./gradlew jibDockerBuild.

```kotlin
plugins {
    application
    kotlin("jvm") version "1.8.21"
    id("io.ktor.plugin") version "2.3.1"
+   id("com.google.cloud.tools.jib") version "3.3.1"
}
...
+ jib {
+     from {
+         platforms {
+             platform {
+                 architecture = "amd64"
+                 os = "linux"
+             }
+         }
+     }
+     to {
+         image = "sample-jib-image"
+         tags = setOf("alpha")
+     }
+     container {
+         jvmFlags = listOf("-Xms512m", "-Xmx512m")
+         mainClass = "com.example.ApplicationKt"
+         ports = listOf("80", "8080")
+     }
+ }
```

Let's change Logback's log level so that we can keep an eye on the logs we added this time. Also, to avoid noise, we'll exclude the monitoring endpoints from request logging.

```xml
- <root level="trace">
+ <root level="info">
```

```kotlin
install(CallLogging) {
    level = Level.INFO
-   filter { call -> call.request.path().startsWith("/") }
+   filter { call -> !arrayOf("/livez", "/readyz", "/metrics-micrometer")
+       .any { it.equals(call.request.path(), ignoreCase = true) } }
    callIdMdc("call-id")
}
```

Once you have added this to the source, deploy the container image to Kubernetes and run the application with the following commands. Check Grafana to see whether logs and metrics are being streamed correctly. Since the services.yaml file is a bit lengthy, it is provided at the very end.

```shell
./gradlew jibDockerBuild && kubectl apply -f services.yaml # Update the Docker tag with each build

# If you have Skaffold installed, you can use the following commands instead:
skaffold init # Generates yaml files
skaffold run  # Builds and deploys the application once
skaffold dev  # Continuously builds and deploys each time you update the source code
```

Including portForward in the Skaffold file makes it convenient to access the application at localhost:8000 automatically.

```yaml
apiVersion: skaffold/v4beta5
kind: Config
metadata:
  name: observability
build:
  artifacts:
    - image: sample-jib-image
-     buildpacks: # Remove this as it slows down the build
-       builder: gcr.io/buildpacks/builder:v1
+     jib: {} # Make sure JAVA_HOME is set to the correct PATH to avoid execution errors.
manifests:
  rawYaml:
    - services.yaml
+portForward:
+  - resourceType: service
+    resourceName: frontend
+    namespace: sample
+    port: 8000
+    localPort: 8000
```

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: sample
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: backend-deployment
  namespace: sample
spec:
  replicas: 2
  selector:
    matchLabels:
      app: backend
  template:
    metadata:
      labels:
        app: backend
    spec:
      containers:
        - name: backend
          image: sample-jib-image:alpha
          imagePullPolicy: IfNotPresent
          ports:
            - containerPort: 8080
          # Comment out until implementing the liveness and readiness probes
          # livenessProbe:
          #   httpGet:
          #     path: /livez
          #     port: 8080
          # readinessProbe:
          #   httpGet:
          #     path: /readyz
          #     port: 8080
          #   initialDelaySeconds: 15
          #   periodSeconds: 20
          #   timeoutSeconds: 5
          #   successThreshold: 1
          #   failureThreshold: 3
          resources:
            requests:
              memory: "512Mi"
              cpu: "500m"
            limits:
              memory: "512Mi"
              cpu: "750m"
---
apiVersion: v1
kind: Service
metadata:
  name: backend
  namespace: sample
  annotations:
    prometheus.io/scrape: 'true'
    prometheus.io/path: '/metrics-micrometer'
    prometheus.io/port: '8080'
spec:
  selector:
    app: backend
  ports:
    - protocol: TCP
      port: 8000
      targetPort: 8080
  type: ClusterIP
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend-deployment
  namespace: sample
spec:
  replicas: 2
  selector:
    matchLabels:
      app: frontend
  template:
    metadata:
      labels:
        app: frontend
    spec:
      containers:
        - name: frontend
          image: sample-jib-image:alpha
          imagePullPolicy: IfNotPresent
          ports:
            - containerPort: 8080
          # Comment out until implementing the liveness and readiness probes
          # livenessProbe:
          #   httpGet:
          #     path: /livez
          #     port: 8080
          # readinessProbe:
          #   httpGet:
          #     path: /readyz
          #     port: 8080
          #   initialDelaySeconds: 15
          #   periodSeconds: 20
          #   timeoutSeconds: 5
          #   successThreshold: 1
          #   failureThreshold: 3
          resources:
            requests:
              memory: "512Mi"
              cpu: "500m"
            limits:
              memory: "512Mi"
              cpu: "750m"
---
apiVersion: v1
kind: Service
metadata:
  name: frontend
  namespace: sample
  annotations:
    prometheus.io/scrape: 'true'
    prometheus.io/path: '/metrics-micrometer'
    prometheus.io/port: '8080'
spec:
  selector:
    app: frontend
  ports:
    - protocol: TCP
      port: 8000
      targetPort: 8080
  type: LoadBalancer
```

Thank you for following along this far. Let's delete the resources created in this blog post with the following commands:

```shell
skaffold delete
docker rmi $(docker images -q 'sample-jib-image') # Remove the built images
# kubectl delete all --all -n sample # If you didn't use Skaffold
helm uninstall grafana -n grafana
helm uninstall loki -n grafana
helm uninstall prometheus -n prometheus
```
Introduction

Hi, we're Yao, Bahng, and Lai from the Global Development Division. We're mobile app engineers, and we usually develop the Global KINTO App. A few months ago, we investigated Kotlin Multiplatform Mobile (KMM) as a preliminary study for the future of the Global KINTO App; see our previous article on this. The results of that investigation indicated that KMM is an excellent solution for rapid product development. Now that KMM has emerged as a new approach compatible with Compose UI, we decided to investigate it further. Based on that investigation, this article discusses app development using Kotlin Multiplatform Mobile (KMM) and Compose Multiplatform.

Before getting into the main topic, let's clarify three points:

- What is KMP?
- What is KMM?
- What is the relationship between KMP and KMM?

The answers are as follows:

- KMP: Kotlin Multiplatform, referring to the technology used to develop applications across multiple platforms using Kotlin, along with its entire ecosystem.
- KMM: Kotlin Multiplatform for Mobile. One of the primary use cases for KMP is code sharing between mobile platforms, and KMP together with several technologies specific to mobile app development is collectively referred to as KMM.

The graph below illustrates the relationship between KMP and KMM.

Reference: -- JetBrains "Kotlin brand assets | Kotlin. (n.d.-c). Kotlin Help.", "Get started with Kotlin Multiplatform for mobile | Kotlin. (n.d.). Kotlin Help." Accessed June 1, 2023

Cross-Platform

You may wonder about the advantages of cross-platform development, particularly when considering Kotlin Multiplatform Mobile (KMM) as a cross-platform solution. Here are the benefits:

- Cost-effective: Cross-platform development allows the use of a single codebase across multiple platforms, eliminating the need for separate platform development teams and reducing the cost of app development.
- Faster deployment: By leveraging a single codebase, developers can create and launch applications on multiple platforms simultaneously, significantly reducing development time and accelerating time to release.
- Simplified maintenance and updates: With a single codebase, apps can be easily maintained and updated, allowing changes to be made once and propagated across all platforms. This streamlines the maintenance process and ensures that all users have access to the latest features.
- Consistent user experience: Cross-platform development tools and frameworks maintain a consistent look and feel across different platforms, providing a unified user experience. This can lead to improved user satisfaction and retention.
- Shared resources and skills: Developers familiar with cross-platform tools and languages can create apps for multiple platforms. This allows for more efficient use of resources and maximizes the return on investment in developer skills and training.

History of Cross-Platform Development Tools for Mobile

- In 2009, PhoneGap was created, later renamed Apache Cordova.
- In 2011, Xamarin was created by Mono and later acquired by Microsoft.
- In 2015, React Native was created by Facebook (Meta).
- In the mid-2010s, designer Frances Berriman and Google Chrome engineer Alex Russell coined the term "progressive web app (PWA)," and Google made several efforts to popularize it.
- In 2017, Flutter was created by Google.
- In 2021, KMM was created by JetBrains.

This means that KMM is currently the most recent cross-platform solution available.

Logo source: -- Apache "Artwork - Apache Cordova. (n.d.)." Accessed June 1, 2023 -- Microsoft "Conceptdev. (n.d.). Xamarin documentation - Xamarin. Microsoft Learn" Accessed June 1, 2023 -- Meta "Introduction · React native." Accessed June 1, 2023 -- Google "Progressive web apps. (n.d.). web.dev." Accessed June 1, 2023 -- Google "Flutter documentation. (n.d.)."
Accessed June 1, 2023 -- JetBrains "Kotlin Multiplatform for Cross-Platform development" Accessed June 1, 2023

Why is KMM Different?

- Shared business logic: KMM reduces code duplication and maintains consistency between Android and iOS by allowing code related to business logic, networking, and data storage to be shared across platforms.
- True native UI: KMM allows the use of platform-specific tools and languages (e.g. XML for Android and SwiftUI or UIKit for iOS) for UI development, resulting in a more native look and feel than other cross-platform solutions.
- Performance: Kotlin code is compiled into native binaries for each platform, resulting in high-performance applications comparable to native development.
- Seamless integration: KMM can be integrated into existing projects, so developers can adopt it incrementally and migrate shared logic to Kotlin without completely rewriting their apps.
- Interoperability with native libraries: KMM seamlessly interoperates with both Android and iOS native libraries, facilitating the use of existing libraries and frameworks.
- Benefits of the Kotlin language: Kotlin is a modern, concise language that provides similar functionality to existing alternatives while reducing redundant code, with tool support from JetBrains.

The above points are explained in detail below.

(1) Shared Business Logic

KMM is used when implementing the data, business, and presentation layers in new projects.

- Flexibility: KMM allows developers to decide the scope of code they want to share, offering a flexible implementation balanced with platform-specific code as needed.
- Consistency assurance: While differences in UI are easily detected in QA testing, inconsistencies in logic between Android and iOS are difficult to detect. By using KMM, the same code runs on both platforms, ensuring consistency.

(2) Truly Native UI

KMM supports native UI: it uses native UI components and follows platform-specific design patterns.
- Android: XML, Jetpack Compose, etc.
- iOS: UIKit, SwiftUI, etc.
- UI performance: KMM uses native UI components, and since the Kotlin code is compiled into native binaries for each platform, performance is generally comparable to native apps.
- Easy platform updates: Because KMM uses each platform's native UI framework, it is easy for developers to adopt new platform features and designs.

(3) Performance

- No JavaScript bridge is required, and there is no reliance on third-party libraries.
- Uses the system's default rendering engine, reducing resource consumption compared to other cross-platform solutions.
- Native code compilation: KMM compiles Kotlin code into native binaries for each platform, which enhances app efficiency and overall performance.
  - Android: standard Kotlin/JVM
  - iOS: the Kotlin/Native compiler (with Objective-C interop)

(4) Seamless Integration

- No need to bridge native modules or rewrite existing code.
- Phased adoption: KMM can be introduced gradually into existing native Android and iOS projects. This allows teams to share business logic, networking, and data storage code across platforms in phases, reducing the risks of a complete technology switch.
- Multiple approaches to using KMM modules in iOS:
  - CocoaPods Gradle plugin and git submodules
  - Framework
  - Swift Package Manager (SPM): starting with Kotlin 1.5.30, KMM modules can be used in iOS projects via the Swift Package Manager.

(5) Interoperability with Native Libraries

- Access to native APIs and libraries: KMM provides direct access to native APIs and libraries, facilitating integration with platform-specific functions and hardware components such as sensors and Bluetooth.
- Seamless integration with platform-specific code: KMM allows platform-specific code to be written as needed, which is useful when dealing with complex native libraries or accessing features not available through shared Kotlin code.
- Kotlin/Native: KMM uses Kotlin/Native for iOS.
This allows seamless interoperability with Objective-C and Swift code, meaning that existing iOS libraries and frameworks can be used without additional bridging or wrapper code.

(6) Kotlin Language Benefits

- Language features: modern, statically typed, null safety, extension functions, data classes, smart casts, interoperability with Java.
- Tools and support: Kotlin has exceptional support and first-class integration in Android Studio and IntelliJ IDEA.
- Industry adoption: Kotlin has seen rapid adoption since becoming the official programming language for Android development. Many backend developers also use Kotlin.

What Kind of People are Using KMM?

Several companies have already adopted Kotlin Multiplatform Mobile (KMM) for mobile app development. Here are some notable examples:

- Netflix: Netflix uses KMM in some of its internal tools to share code between Android and iOS apps.
- VMware: VMware uses KMM for cross-platform development of the Workspace ONE Intelligent Hub app (an employee management tool for Android and iOS).
- Yandex: Yandex, a Russian multinational technology company, has adopted KMM in several of its mobile apps, including Yandex Maps and Yandex Disk.
- Quizlet: Quizlet, an online learning platform, uses KMM to share code between its Android and iOS apps, improving development efficiency.

These companies represent diverse industries, and their adoption of KMM demonstrates the flexibility and usefulness of the technology in different contexts. As KMM becomes more popular, it is likely that even more companies will adopt it to meet their cross-platform mobile development needs.

Reference: -- JetBrains "Case studies. (n.d.). Kotlin Multiplatform." Accessed June 1, 2023

How to Easily Create a KMM Project

Given these benefits, would you like to create a KMM project and give it a try? Here is how:

1. Download the latest Android Studio.
2. In Android Studio, select File > New > New Project.
3. Select Kotlin Multiplatform App in the list of project templates, and click Next.
4. Specify the Name of the new project and click Next.
5. For the iOS framework distribution, select the Regular framework. Keep the default names for the Application and Shared folders.
6. Click Finish.

-- JetBrains "Create your first cross-platform app | Kotlin. (n.d.). Kotlin Help." Accessed June 1, 2023

Mobile App Architecture Using KMM

The following graph shows an example of one of the common KMM patterns. This architecture takes full advantage of KMM's characteristic code sharing: data persistence (including the cache and database), networking, use cases, and view models are all implemented in KMM. For the UI, both Android and iOS use native UI components, with support for both older frameworks such as XML and UIKit and newer ones such as Jetpack Compose and SwiftUI.

This architecture allows business logic modules written in Kotlin to be imported into iOS as SDKs, so iOS developers can focus on UI development for efficient development. Here is some iOS code for a simple screen with an FAQ list. Except for the common UI utility class, this is all that needs to be implemented.

```swift
// FaqView.swift
struct FaqView: View {
    private let viewModel = FaqViewModel()
    @State var state: FaqContractState

    init() {
        state = viewModel.createInitialState()
    }

    var body: some View {
        NavigationView {
            listView()
        }
        .onAppear {
            viewModel.uiState.collect(
                collector: Collector<FaqContractState> { self.state = $0 }
            ) { possibleError in
                print("finished with possible error")
            }
        }
    }

    private func listView() -> AnyView {
        manageResourceState(
            resourceState: state.uiState,
            successView: { data in
                guard let list = data as? [Faq] else {
                    return AnyView(Text("error"))
                }
                return AnyView(
                    List {
                        ForEach(list, id: \.self) { item in
                            Text(item.description)
                        }
                    }
                )
            },
            onTryAgain: { viewModel.setEvent(event: FaqContractEvent.Retry()) },
            onCheckAgain: { viewModel.setEvent(event: FaqContractEvent.Retry()) }
        )
    }
}
```

That's not all about KMM.
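To round out the example: here is a hypothetical Kotlin sketch of the shared-module side that the Swift view consumes. The type and member names (FaqViewModel, FaqContractState, FaqContractEvent) mirror the Swift snippet, but the bodies are assumptions for illustration, not our actual implementation; in particular, a plain listener stands in for the kotlinx.coroutines StateFlow that the Swift `.collect` call implies.

```kotlin
// Hypothetical sketch of the shared KMM contract behind FaqView.swift.
data class Faq(val description: String)

// Simplified UI resource wrapper (loading / success / error).
sealed class ResourceState<out T> {
    object Loading : ResourceState<Nothing>()
    data class Success<T>(val data: T) : ResourceState<T>()
    data class Error(val message: String) : ResourceState<Nothing>()
}

data class FaqContractState(val uiState: ResourceState<List<Faq>>)

sealed class FaqContractEvent {
    class Retry : FaqContractEvent()
}

class FaqViewModel {
    // A listener keeps this sketch dependency-free; a real KMM project would
    // expose a StateFlow that iOS consumes through a Collector.
    private var listener: ((FaqContractState) -> Unit)? = null
    private var state = createInitialState()

    fun createInitialState() = FaqContractState(ResourceState.Loading)

    // Register an observer and immediately replay the current state.
    fun onState(block: (FaqContractState) -> Unit) {
        listener = block
        block(state)
    }

    fun setEvent(event: FaqContractEvent) {
        when (event) {
            is FaqContractEvent.Retry -> {
                // Fetch FAQs (stubbed here) and publish the new state to the UI.
                state = FaqContractState(
                    ResourceState.Success(listOf(Faq("How do I reset my password?")))
                )
                listener?.invoke(state)
            }
        }
    }
}
```

Because all of this lives in the shared module, the Android side can drive the same view model from Jetpack Compose while iOS drives it from SwiftUI.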
KMM has even more potential!

Architecture That Shares UI Code

In addition to business logic code, KMM can also share UI code using Compose Multiplatform. As discussed earlier, Kotlin Multiplatform Mobile (KMM) is primarily used for sharing business logic, but it also supports shared UI development. Compose Multiplatform is a declarative framework for sharing UI across multiple platforms using Kotlin. Based on Jetpack Compose, it was developed by JetBrains and open-source contributors. Combining KMM with Compose Multiplatform allows both the logic code and the UI to be built in Kotlin.

Reference: -- JetBrains "Kotlin brand assets | Kotlin. (n.d.-c). Kotlin Help.", "Compose multiplatform UI framework | JetBrains. (n.d.). JetBrains: Developer Tools for Professionals and Teams." Accessed June 1, 2023

Comparison of Different Patterns of KMM Architecture

Assuming a mobile project is being developed, the estimated workloads for each client layer are as follows: UI: 2 people, Presentation: 1 person, Business/Domain: 1 person, Data/Core: 1 person. The workload saved is based on the percentage of code written with KMM.

| Layer | Pattern A | Pattern B | Pattern C | Pattern D |
|---|---|---|---|---|
| UI | 2*2 | 2*2 | 2*2 | 2 |
| Presentation | 1*2 | 1*2 | 1 | 1 |
| Business/Domain | 1*2 | 1 | 1 | 1 |
| Data/Core | 1 | 1 | 1 | 1 |
| Total | 9 | 8 | 7 | 5 |
| Workload cost | -10% | -20% | -30% | -50% |

KMM can reduce workloads by up to 50%. The biggest advantage of KMM compared to other cross-platform solutions is its flexibility in code sharing: how much code to share with KMM is entirely up to us. Other cross-platform solutions do not offer this level of flexibility.

Summary

Cons of KMM

Of course, every tool has its drawbacks, and KMM is no exception.

- Limited platform support: Kotlin Multiplatform Mobile can target multiple platforms, but not all platforms are supported. For example, it does not currently support web or desktop applications.
- Learning cost: If you are not familiar with Kotlin, there is a learning cost to using it effectively for multiplatform development.
Framework compatibility: Kotlin Multiplatform Mobile can be used with various frameworks, but is not compatible with all of them. This limits your options and may require you to work within certain constraints.

Maintenance overhead: Maintaining a multiplatform codebase can be more complex than maintaining a separate codebase for each platform. This added complexity can lead to increased overhead in testing, debugging, and maintenance.

Tool limitations: Some tools and libraries may not be compatible with Kotlin Multiplatform Mobile, making development more complicated or requiring the search for alternative solutions.

Applications

As mentioned above, integrating KMM's architecture into a project can be considered in various situations, each with its pros and cons.

| Situation | Pattern A | Pattern B | Pattern C | Pattern D |
| --- | --- | --- | --- | --- |
| General existing project | ✓ | ✓ | ✓ | ? |
| Simple existing project | ✓ | ✓ | ✓ | ✓ |
| Complex existing project | ✓ | ✓ | ✓ | ✗ |
| New project | ✓ | ✓ | ✓ | ✓ |
| Prototype | ✓ | ✓ | ✓ | ✓ |

With the technical benefits covered, let's get back to the actual development process. Like most mobile development teams, ours is small. Given our limited engineering resources, when faced with a significant change, such as upgrading from version 1.0 to 2.0, we need to collaborate with other divisions and both onsite and offshore outsourcing teams to ensure a quick release. However, there are several problems in this process:

- Seamless collaboration between different teams is challenging.
- With more developers and different teams in different offices, communication costs increase.
- It becomes difficult to maintain consistency across different teams.
- Working with external teams makes it difficult to manage the security of sensitive information.

KMM can address almost all of these problems by developing core modules, defining protocols, and adopting a separate approach for UI and logic development:

- Allows each team to focus on their part.
- Can greatly facilitate collaboration.
- Reduces the time and cost required for communication.
By having the core modules developed by the KMM team on a consistent basis, most inconsistencies are eliminated in advance. Although KMM supports a single codebase, the separation of the UI and logic layers allows for the use of multiple repositories. The core modules are developed by the KMM team and the SDK is provided to external teams. This eliminates the need for the source code to be disclosed to external teams and reduces the risk of leaking confidential information. This is difficult to achieve with other cross-platform technology solutions. In conclusion, it can be said that KMM brings significant benefits not only in terms of technical advantages but also in fostering cooperation across divisions and companies. Conclusion Given the importance of KMM in new projects and its potential for significant workload savings, we have already integrated KMM into new projects for the next major release. We will continue to monitor new technologies and tools related to KMM and seek opportunities to further enhance efficiency.
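To close out the KMM discussion with something concrete: the Compose Multiplatform pattern described earlier boils down to writing ordinary composables once in `commonMain`. The snippet below is an illustrative sketch, not code from our project — the composable and its parameters are hypothetical.

```kotlin
// Illustrative Compose Multiplatform snippet: a composable defined once in
// commonMain renders natively on every target the project configures.
import androidx.compose.foundation.layout.Column
import androidx.compose.material.Text
import androidx.compose.runtime.Composable

@Composable
fun FaqList(items: List<String>) {
    Column {
        items.forEach { description ->
            // The same declarative UI code is compiled for each platform.
            Text(text = description)
        }
    }
}
```

This is what makes "Pattern D" in the comparison table possible: the UI row shrinks from 2×2 to 2 because there is only one UI implementation to write and maintain.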
Introduction Hello. My name is Nakaguchi and I am the team leader of the iOS team in the Mobile Development Group. In my day-to-day job, I'm involved in iOS development for: KINTO Kantan Moushikomi App (KINTO Easy Application App) Prism Japan ( Smartphone app version / Recently released web version ) As a retrospective of iOSDC Japan 2024, held from Thursday, August 22nd to Saturday, August 24th, we hosted the [iOSDC JAPAN 2024 AFTER PARTY] on Monday, September 9th, 2024. I'd like to reflect on why I held the event, the preparations leading up to it, and how it went. In particular, regarding the part about "why it was held," I will present my own thoughts, and I would be happy if many people can relate to them. This blog is for: Those who participated in this event Those who attended iOSDC Those who often participate or would like to participate in events Those who organize or would like to organize events I'm also writing this as a Tech Blog post to share my experience with as many people as possible, because my own motivation has exploded by hosting this event. Why I Held the Event This event has been planned in my mind since around April. If you ask me why I planned it, I honestly don't think I'd be able to express it in words. Since I took on the role of team leader in October of last year, I have made an effort to attend many events that interest me, not only those related to iOS, but also those related to development productivity, organizational management, engineering managers, and so on. In the midst of this, I noticed the following feelings arising. Participating in an event really boosts your motivation. The people who speak at events and organize them are so cool! If I had to put my feelings into words, it would be: "It's kind of cool! I want to host an event myself!" That's how I felt back in April. 
However, the purpose of an event that involves investing a lot of resources, such as money, time, and people, cannot be explained simply by "because it's cool." After that, I began to struggle within myself about the significance of hosting an event. Even now that the event has ended, I don't think I've reached a clear answer. (I'm just grateful that we were able to hold the event under such ambiguous circumstances.) When hosting an event as part of an organization, certain expectations are inevitably placed upon you. Commonly mentioned benefits include "increasing the organization's presence," "spreading the word about services," "leading to recruitment," and so on. I think these are all great benefits of holding an event properly, and if these results appear, the event can be called a great success. However, there are some aspects that I personally don't feel quite right about. I believe that most participants in IT industry events attend for the purpose of self-improvement, such as "I want to acquire new knowledge," "I want to expand my network," or "I enjoy participating in the event itself," and I think it is very rare for people to attend events because they want to know what kind of organization the organizer is or what services they offer, or because they want to change jobs to that company. After struggling with the significance of holding events, I came to my own conclusion: "I want my motivation to be contagious to as many people as possible." As I mentioned above, when I participate in an event, I feel a huge boost in motivation, and I think many others feel the same way. I believe that if there is even one more person who wants to work harder tomorrow, the accumulation of those efforts will lead to the betterment of the world. Also, as motivation increases, some people may want to host events like I did, or speak at one. In turn, others will see this and want to do the same. I believe that good motivation like this is surely contagious!
So, at this stage, I decided to hold this event with the thought that "I want my motivation to be contagious to as many people as possible" as the significance of the event (although I hadn’t organized my thoughts to this extent back in April when I first came up with the idea). (And, from an organizational perspective, the mere fact that it increases motivation does not mean that we should start holding events one after another, so it looks like the days of struggle will continue for a while.) Next, I would like to give you an overview of this event. Event Overview Event name: iOSDC JAPAN 2024 AFTER PARTY Date and time: Monday, September 9, 2024 from 19:00 Participants: Around 20 people Three companies: WealthNavi, TimeTree, and us held a joint meeting as the iOSDC Retrospective. There were three LT presentations, one from each company, plus a panel discussion with three people, one from each company. Now, let me introduce the process leading up to this event. Until the Event In April, I came up with the idea to hold a mobile development-related event, but I was unsure how to proceed. We have a Developer Relations Group (DevRel) that provides support for event management, so I thought that if I reached out to them, I could run the event smoothly without any issues. On the other hand, Attracting attendees Calling for speakers Deciding on the theme of the event are challenging even with the support of our Developer Relations Group. Therefore, we've determined that organizing a mobile-related event on our own would be difficult. Under this circumstance, we wanted to ask Findy for their help, as they put a lot of effort into hosting events and have extensive know-how in attracting attendees and recruiting speakers. So, we attended this event which was held in May . I have also posted a blog Event Participation Report , so please take a look. This event gave us the opportunity to exchange information with the person in charge at Findy. 
After much discussion about what kind of event to hold, we were introduced to WealthNavi and TimeTree, and decided to hold an iOSDC retrospective event. I want to extend my thanks to Findy for their advice and help in organizing the event, and to WealthNavi and TimeTree for co-hosting the event. After the three companies decided to hold an iOSDC Retrospective, many things were decided smoothly, including: How to structure the event Speakers and panelists for the panel discussion Date and time of the event Now that the event recruitment page on Connpass has been successfully completed, the next step is to recruit participants. This time, all three companies shared the desire to place emphasis on communication with event participants, so the event was offline only. Since the event was held in our company's event space, we aimed to recruit around 30 people, given the capacity. We opened the Connpass page on Thursday, August 8th, 2024, and within a few days we had about 10 people register to attend, which we thought was a good number of participants. However, the actual event promotion would take place from August 22nd to 24th, when the iOSDC would be held, so I thought it would be up to us to see how much we could increase participation during that period. This year, we displayed our first sponsor booth, which allowed us to promote the event there and carry out PR by posting on our official X page during the iOSDC period. As a result, the number of registrations that increased during the iOSDC period was **"0"** ...! *To be honest, I was lazy about the event's call for participants.* Looking back, I think there was a need to improve the way we promoted the event at the sponsor booth. Rather than just handing out flyers, we should have put more thought into creating a flow of people to register on the spot (for example, handing out novelties to people who registered). Here is a reminder for next time. 
In fact, when we checked the statistics on the event page on Connpass, we could see that there were absolutely no registrations between August 22nd (Thu) and 24th (Sat), and that there was no increase in page views at all. ![](/assets/blog/authors/nakaguchi/2024-09-12-after-iosdc/connpass.png)*Statistics confirmed by Connpass* After that, up until Monday, September 9th, participants gradually registered at the pace shown in the image above. I also had the opportunity to take the time to announce the event when I attended another company's event, so we were able to have 24 participants registered as of the day of the event. I felt that the theme of "iOSDC Retrospective Event" was effective in drawing in a certain number of people. Although we did not reach our initial goal of 30 registrations, I personally felt that the number of registrants was more than sufficient for the first organized event. Now all that was left was to wait for the day. On the Day of Event These kinds of events are often subject to cancellation on the day of the event for a variety of reasons. In fact, several participants unfortunately canceled on the day of this event as well. However, with the day arriving, I didn't have the time to be overly excited or upset about the increase or decrease in the number of participants. We focused on making this an event that was worth attending for co-hosts WealthNavi and TimeTree, as well as for all participants who joined us on the day. Here's a quick look back at what happened on the day. We waited nervously for everyone to arrive. The venue seemed to be set up. Venue set-up completed It was 7pm, and with WealthNavi, TimeTree, and all the participants present, the first LT session was about to begin. "DX: Digital transformation starting with Package.swift" presented by Muta-san from WealthNavi. 
Muta-san's presentation I learned a lot from his explanation of the basics of Swift Package Manager, including aspects that I thought I knew but actually did not. I believe it was a valuable opportunity to hear about the initiatives of WealthNavi and what they envision for the future. I also learned a lot from the explanation of Swift 6, which is coming up soon. Next is the second LT. "Morphological Analysis of iOSDC Proposals to Explore Trend Transitions" presented by Sakaguchi-san from TimeTree. Sakaguchi-san's presentation I was very interested in this presentation from the moment I saw the title. I have attended iOSDC several times in the past, and I feel that there are certain trends in the sessions, which was interesting to see reflected in the proposals. In addition, this analysis tool was created using Xcode, and it was fun to see it being demonstrated on a simulator during the presentation. Next is the third LT. "I want to share what we did before our first exhibit at iOSDC" presented by Hinomori-san from KINTO Technologies. Hinomori-san's presentation Since this was our first time exhibiting at a sponsor booth, he shared the challenges we faced during the preparation period. I was also involved in preparing some of the exhibits, and it was quite difficult to figure out through trial and error what kind of content would resonate with visitors and how to make it more visually appealing. Please take a look at what we produced as a sponsor, which is introduced in more detail on the Tech Blog here . Next, there was a panel discussion, followed by a break and a toast. The panelists were: Cho-san from WealthNavi, Masaichi-san from TimeTree, and Hinomori-san from KINTO Technologies, and I was the moderator of the session. Panel Discussion Members These topics were prepared in advance as we looked back on the iOSDC. The topics were decided after interviewing the panelists in advance to find out what kind of content they would be interested in.
Panel Discussion Topics Due to time constraints, we were unable to discuss all the topics, but we made a conscious effort to proceed by observing the discussion at hand and picking out topics that fit the flow of the moment. They talked about the status of iOS development at each company, their efforts towards iOSDC, and the changes this year compared to previous years. Panelists Finally, a group photo was taken with all participants. Group photo Thoughts After the Event As I mentioned at the beginning, I started planning for this event around April and was able to hold it. I was constantly anxious about whether the event could be held smoothly, whether the participants would show up, and whether my moderation on the day would go well. I personally feel that we were able to hold a very successful event, thanks to the cooperation of WealthNavi and TimeTree, our co-hosts, as well as the support of the Developer Relations Group and the organizing staff on the day of the event. Of course, everyone who participated on the day made the event a great success. I would like to express my sincere gratitude to everyone who was involved in this event. ● What I liked It was invaluable to be able to connect with other companies such as WealthNavi, TimeTree, and Findy when hosting the event. Additionally, since this was my first time organizing an event, I gained confidence from successfully completing it. ● What I'd like to improve in the future As I mentioned earlier, I find it quite challenging to attract participants. Since I haven't found a good solution to this yet, I'd like to consider it carefully with everyone involved the next time we organize an event. I also wish more team members from our iOS team could have participated in this event. At this event, Assistant Manager Hinomori-san took the stage as an LT speaker and panelist.
While he usually has many opportunities to speak at events, I wanted to encourage team members who don't often get the chance to take on that challenge. However, when we reached out for speakers within the company, there were no volunteers from the team members, so we decided to have Hinomori-san take the stage. I personally feel that there are major areas for improvement going forward, such as making efforts to lower the hurdles to speaking at the internal recruitment stage and establishing a support system for preparing for speaking sessions. Conclusion In October, we are planning to hold a review event for DroidKaigi 2024 together with WealthNavi and TimeTree, and we hope to continue holding such events on an irregular basis in the future. As I said at the beginning, "I want to spread motivation to as many people as possible," and I feel that the person who was most motivated by this event was none other than myself. If there were participants who felt that their motivation had increased, then I would consider this event a great success. I'd like to continue to motivate everyone involved through various activities, including holding events like this one.
Hello! I’m Wada ( @cognac_n ), an AI Evangelist at KINTO Technologies. How do you manage your prompts? Today, I will introduce Prompty, a tool that simplifies creating/editing, testing, implementing, and organizing prompts!

1. What is Prompty?

Prompty is a tool designed to streamline prompt development for large language models (LLMs). It centralizes prompts and parameters in a YAML-based format, making it well suited to version control on GitHub and improving collaboration in team environments. Using the Visual Studio Code (VS Code) extension can greatly improve the efficiency of prompt engineering.

Benefits of Introducing Prompty

Although integration with Azure AI Studio and Prompt Flow offers benefits, this article will focus on the integration with VS Code. Who should consider using Prompty:

- Those looking to speed up prompt development
- Developers who need version control for prompts
- Teams collaborating on prompt creation
- Anyone wanting to simplify prompt execution on the application side

https://github.com/microsoft/prompty

2. Prerequisites

Requirements (at the time of writing):

- Python 3.9 or higher
- VS Code (if using the extension)
- OpenAI API Key or Azure OpenAI Endpoint (depending on the LLM in use)

Installation and initial setup

Install the VS Code extension: https://marketplace.visualstudio.com/items?itemName=ms-toolsai.prompty

Use pip or another package manager to install the necessary library:

```shell
pip install prompty
```

https://pypi.org/project/prompty/

3. Try It Out

3-1. Create a New Prompty File

Right-click in the Explorer tab and select "New Prompty" to create a template.
![New Prompty](/assets/blog/authors/s.wada/20240821/image_2.png =350x)
*New Prompty*

The created template is as follows:

```yaml
---
name: ExamplePrompt
description: A prompt that uses context to ground an incoming question
authors:
  - Seth Juarez
model:
  api: chat
  configuration:
    type: azure_openai
    azure_endpoint: ${env:AZURE_OPENAI_ENDPOINT}
    azure_deployment: <your-deployment>
  parameters:
    max_tokens: 3000
sample:
  firstName: Seth
  context: >
    The Alpine Explorer Tent boasts a detachable divider for privacy,
    numerous mesh windows and adjustable vents for ventilation, and
    a waterproof design. It even has a built-in gear loft for storing
    your outdoor essentials. In short, it's a blend of privacy, comfort,
    and convenience, making it your second home in the heart of nature!
  question: What can you tell me about your tents?
---

system:
You are an AI assistant who helps people find information. As the assistant,
you answer questions briefly, succinctly, and in a personable manner using
markdown and even add some personal flair with appropriate emojis.

# Customer
You are helping {{firstName}} to find answers to their questions.
Use their name to address them in your responses.

# Context
Use the following context to provide a more personalized response to {{firstName}}:
{{context}}

user:
{{question}}
```

In the area enclosed by `---`, specify parameters. Below this section, add the main content of the prompt. You can define roles using `system:` or `user:`.

Basic Parameter Overview

| Parameter | Description |
| --- | --- |
| name | Specifies the name of the prompt |
| description | Provides a description of the prompt |
| authors | Includes information about the prompt creators |
| model | Details the AI model used in the prompt |
| sample | If the prompt contains placeholders such as {{context}}, the content specified here is substituted during testing |

3-2. Configuring API Keys and Parameters

There are several ways to set the required API keys, endpoint information, and execution parameters.
[Option 1] Specifying in the .prompty file

This involves directly adding these details to the .prompty file.

```yaml
model:
  api: chat
  configuration:
    type: azure_openai
    azure_endpoint: ${env:AZURE_OPENAI_ENDPOINT}
    azure_deployment: <your-deployment>
  parameters:
    max_tokens: 3000
```

You can also reference environment variables, such as `${env:AZURE_OPENAI_ENDPOINT}`. However, please note that `azure_openai_api_key` cannot be configured in this way.

![azure_openai_api_key cannot be written directly in the .prompty file](/assets/blog/authors/s.wada/20240821/image_3.png =750x)
*azure_openai_api_key cannot be written directly in the .prompty file*

[Option 2] Configuring with settings.json

Another approach is to use VS Code’s settings.json. If the settings are incomplete and you click the play button in the upper-right corner, you will be prompted to edit settings.json. You can create multiple configurations beyond the default definition and switch between them during testing. When `type` is set to `azure_openai` and `api_key` is left empty, the process will direct you to authenticate using Azure Entra ID, as explained later.

```json
{
  "prompty.modelConfigurations": [
    {
      "name": "default",
      "type": "azure_openai",
      "api_version": "2023-12-01-preview",
      "azure_endpoint": "${env:AZURE_OPENAI_ENDPOINT}",
      "azure_deployment": "",
      "api_key": "${env:AZURE_OPENAI_API_KEY}"
    },
    {
      "name": "gpt-3.5-turbo",
      "type": "openai",
      "api_key": "${env:OPENAI_API_KEY}",
      "organization": "${env:OPENAI_ORG_ID}",
      "base_url": "${env:OPENAI_BASE_URL}"
    }
  ]
}
```

[Option 3] Configuration with a .env file

By creating a .env file, environment variables can be read directly from it. Note that the .env file must be located in the same directory as the .prompty file you are using. This setup is especially convenient for local testing.
```shell
AZURE_OPENAI_API_KEY=YOUR_AZURE_OPENAI_API_KEY
AZURE_OPENAI_ENDPOINT=YOUR_AZURE_OPENAI_ENDPOINT
AZURE_OPENAI_API_VERSION=YOUR_AZURE_OPENAI_API_VERSION
```

[Option 4] Configuring with Azure Entra ID

By signing in with an Azure Entra ID that has the appropriate permissions, you can access the API. (I haven’t tested this option yet.)

3-3. Running Prompts in VS Code

You can easily execute prompts by clicking the play button in the upper-right corner. The results are displayed in the OUTPUT section. To view raw data, including placeholder substitution and token usage, select "Prompty Output (Verbose)" from the dropdown in the OUTPUT panel. This option is useful for checking detailed information.

*Use the Play button in the upper right to run the prompt*
*Results can be seen in the OUTPUT section*

3-4. Other Parameters

Various parameters are introduced on the following page. Defining options like `inputs` and `outputs`, especially when using JSON mode, improves prompt visibility, so be sure to set them.

```yaml
inputs:
  firstName:
    type: str
    description: The first name of the person asking the question.
  context:
    type: str
    description: The context or description of the item or topic being discussed.
  question:
    type: str
    description: The specific question being asked.
```

3-5. Integrating with an Application

The syntax for integration may vary depending on the library used in your application. As Prompty is frequently updated, be sure to check the latest documentation regularly. Here’s an example code snippet demonstrating the use of Prompty with Prompt Flow. This allows for simple prompt execution.
```python
from promptflow.core import Prompty, AzureOpenAIModelConfiguration

# Set up configuration to load Prompty using AzureOpenAIModelConfiguration
configuration = AzureOpenAIModelConfiguration(
    azure_deployment="gpt-4o",  # Specify the deployment name for Azure OpenAI
    api_key="${env:AZURE_OPENAI_API_KEY}",  # Retrieve the API key from environment variables
    api_version="${env:AZURE_OPENAI_API_VERSION}",  # Retrieve the API version from environment variables
    azure_endpoint="${env:AZURE_OPENAI_ENDPOINT}",  # Retrieve the Azure endpoint from environment variables
)

# Configure overrides for model parameters
# Here, max_tokens is overridden as an example
override_model = {"configuration": configuration, "max_tokens": 2048}

# Load Prompty with the overridden model settings
prompty = Prompty.load(
    source="to_your_prompty_file_path",  # Specify the Prompty file to use
    model=override_model,  # Apply the overridden model settings
)

# Execute Prompty with the provided values and obtain the result
result = prompty(
    firstName=first_name,
    context=context,
    question=question,
)
```

4. Summary

Prompty is a powerful tool that can significantly streamline prompt engineering tasks. In particular, the development environment integrated with VS Code allows for seamless creation, testing, implementation, and management of prompts, making it highly user-friendly. Mastering Prompty can greatly enhance the efficiency and quality of prompt engineering. I encourage everyone to give it a try!

Benefits of Introducing Prompty (Repost)

We Are Hiring!

At KINTO Technologies, we are seeking colleagues to help drive the adoption of generative AI in our business. We are open to casual interviews, so if you’re even slightly interested, please contact us via the link below or through X DM. We look forward to hearing from you!

https://hrmos.co/pages/kinto-technologies/jobs/1955878275904303115

Learn more about how we work with generative AI here.
https://blog.kinto-technologies.com/posts/2024-01-26-GenerativeAIDevelopProject/ Thank you for reading this far.
Introduction

Hello, I'm Osanai, the leader of the SRE team in the Platform Group at KINTO Technologies (KTC). In this article, I'd like to write about how we formulated the SRE team's mission and vision. (If you only want to see the finished mission, jump ahead to the section here.)

Why We Decided on a Mission and Vision

There were three main reasons that led us to define a mission and vision.

1. A proposal from a team member

The SRE team was founded in January 2021, but after various twists and turns it was down to a single member as of the end of March this year. Then a member who joined in April proposed, "Why don't we create a mission and vision for the SRE team?" At his previous company, the organization's mission and vision were rooted in daily work and functioned well. At that point, though, I didn't really understand the need and felt we would get to it when we had time (sorry!).

2. The need to show our roadmap to upper management

Meanwhile, the SRE team was a two-person operation with no spare capacity, so we were preparing initiatives to strengthen recruiting. In the process, we needed to clearly communicate to upper management what the SRE team wants to achieve, what challenges stand in the way, and what kind of people, and how many, we need to solve them. That led to creating an SRE team roadmap, and we came to feel it would be better to also have a mission and vision as a more abstract layer guiding the team's activities.

3. The diversification of the term "SRE"

While browsing the missions and visions of SRE teams at various companies, I came across a slide (from "SREは何を目指すのか" — "What does SRE aim for?") that resonated with me the most. At KTC, alongside the SRE team, there are cross-functional teams and groups such as Platform Engineering, Cloud Infrastructure, DBRE, CCoE, and Security. The term SRE covers a very broad area, so I felt the need to clarify what we in particular should do, given this surrounding environment.

How We Decided

So we decided to create a mission and vision, but with no guidelines on how, we started by feeling our way. First, regarding how much time to spend: deciding it gradually in weekly meetings seemed likely to drag on, so we chose a fairly short, concentrated period. We considered taking one full day, but since ideas can depend on how you feel that day, we settled on one hour per day over five business days spanning a weekend. Having actually done it, I personally feel that spreading it over multiple days was better (ideas came to me in the bath or just before going to sleep).

As for the method, we used Google re:Work as a reference, following the manager-themed content "Set and communicate a team vision." Since our goal was to formulate the mission and vision, we stopped at the steps for deciding core values, purpose, and mission. For the vision, we decided to build on the finalized mission from the perspective of "When the mission is realized, what do we want the SRE team, and the company as a whole, to look like?"

Day 1: Identifying the values we care about

On the first day, each team member started by listing the values they want to cherish. We used Miro as a collaboration tool, and each of us wrote on sticky notes what we want to value, including non-technical things. Since it can be hard to come up with "things you value," I found it effective to approach it from the opposite direction as well: "What states would we hate to end up in?" We then talked freely about the values of people each of us holds in high regard.

Day 2: Digging into the team's core values

On the second day, we discussed the values we had listed on day one. Even for different items, repeatedly asking "Why do we want to value this?" — like a five-whys analysis — gradually abstracted them until they converged on similar values, so taking notes along the way may help when deciding the mission later.
Next, we considered explanations of the values we could empathize with, and the concrete actions they imply. Since there were two of us, we each picked values from the other's list that resonated and dug into them. For example, one value was "better output through good collaboration," but since that was somewhat abstract, replacing it with concrete words — what exactly is good collaboration, and what is good output? — made the image much clearer.

Day 3: Examining the team's reason for existing

On the third day, we thought about purpose (the team's reason for existing), building answers while discussing six questions titled "Why does this team exist?" One caution I noticed: these questions concern the current situation, so the answers about the fundamental reason for existing can be biased (especially if you want to change the current organization). By looking back at what we had done so far, abstracting it, and re-examining why we had done it, candidates for our fundamental reason for existing came into view.

Day 4: Deciding the mission

On the fourth day, we finally decided the mission. First, as self-reflection, each of us wrote out our thoughts on sticky notes for three questions. Then, drawing on everything from day one onward, we decided the mission. Honestly, it partly came down to whether the right words would come to us, but we picked out keywords from the work and conversations so far and shaped them into an expression that captured them. We also checked that it satisfied the five characteristics of a mission before finalizing it.

Day 5: Deciding the vision

On the fifth day, we imagined what the SRE team, and the company as a whole, would look like once the mission was realized, and decided the vision.

The Miro board we actually created (atmosphere only)

Our Mission and Vision

The mission and vision we settled on are shown below. To explain the mission: first, regarding "enabling products to be delivered as fast as possible" — KTC has a wide variety of products, and we want to create an environment where features reach users as quickly as possible so that we can gather feedback. But delivering quickly is not enough; we need to deliver products that are "valuable" to users. And no matter how quickly we deliver valuable products, it is meaningless if users cannot use them with satisfaction, so we added the words "highly reliable."

As for the vision, we imagined what KTC would look like once it could deliver highly reliable, valuable products at top speed, and concluded that what it takes to achieve both the quality of "highly reliable, valuable products" and the speed of "fastest possible delivery" is balancing development and operations based on service levels.

Closing

We successfully formulated the team's mission and vision. It has only been a short time since then, but conversations are already emerging in which we check what we plan to do against the mission — should we really do this, and if so, how far should we take it? — so I have a feeling it will function well as a guiding principle for the team. But formulating it is not the end. We want to create a roadmap for realizing the mission and vision and work on it as one team. The SRE team is also looking for people to work with us. If you're even slightly interested, please feel free to get in touch. We look forward to hearing from you!

https://hrmos.co/pages/kinto-technologies/jobs/1811937538128224258
Introduction

Hello! My name is Romie. I’m in the Mobile App Development Group, and I’m responsible for developing the my route app for Android. At KINTO Technologies Corporation (KTC), we have access to Udemy Business accounts, giving us access to a wide range of courses! This time, I chose the course Kotlin Coroutines and Flow for Android Development. Taught entirely in English, it covers the basics of asynchronous processing in Android, and demonstrates how to use Coroutines and Flow.

Reflections on the Course

Here are my honest impressions of the course: the English is straightforward and easy to understand, and aside from Android-specific terms, there are almost no difficult words. So, I highly recommend this course for anyone who has moved beyond the beginner stage and wants to learn more about asynchronous processing, Coroutines, and Flow, while also practicing their English!

Topics that left an impression on me

Coroutines and Flow differ from traditional asynchronous approaches in that they let you keep long-running work off the main thread while writing it in a sequential style, making asynchronous tasks easier to write. Additionally, because coroutines are built into the Kotlin language itself, with first-party support in the kotlinx.coroutines library, there is no need to pull in a third-party framework, which is a significant advantage! While these are just the basics, I’ve highlighted the key points below for future reference.

Callback

A callback is a basic method for handling asynchronous processes. You can branch the process using onResponse/onFailure.

```kotlin
exampleCallback1()!!.enqueue(object : Callback<Any> {
    override fun onFailure(call: Call<Any>, t: Throwable) {
        println("exampleCallback1 : Error - onFailure")
    }

    override fun onResponse(call: Call<Any>, response: Response<Any>) {
        if (response.isSuccessful) {
            println("exampleCallback1 : Success")
        } else {
            println("exampleCallback1 : Error - isSuccessful is false")
        }
    }
})
```

RxJava

In RxJava, you can branch the process within subscribeBy using onSuccess and onError.
exampleRxJava()
    .flatMap { result -> example2() }
    .subscribeOn(Schedulers.io())
    .observeOn(AndroidSchedulers.mainThread())
    .subscribeBy(
        onSuccess = { println("Success") },
        onError = { println("Error") }
    )
    .addTo(CompositeDisposable())
async/await With async/await, asynchronous tasks are launched concurrently, and awaitAll is used to gather and process the results together. This is a commonly used pattern in everyday asynchronous code.
viewModelScope.launch {
    try {
        val resultAsyncAwait = awaitAll(
            async { exampleAsyncAwait1() },
            async { exampleAsyncAwait2() },
            async { exampleAsyncAwait3() }
        )
        println("Success")
    } catch (exception: Exception) {
        println("Error")
    }
}
viewModelScope.launch {
    try {
        val resultAsyncAwait = exampleAsyncAwait()
            .map { result -> async { multiExampleAsyncAwait() } }
            .awaitAll()
        println("Success")
    } catch (exception: Exception) {
        println("Error")
    }
}
withTimeout withTimeout handles timeouts: a TimeoutCancellationException is thrown when the timeout is exceeded.
viewModelScope.launch {
    try {
        withTimeout(1000L) {
            exampleWithTimeout()
        }
        println("Success")
    } catch (timeoutCancellationException: TimeoutCancellationException) {
        println("Error due to timeout")
    } catch (exception: Exception) {
        println("Error")
    }
}
withTimeoutOrNull withTimeoutOrNull also handles timeouts, but unlike withTimeout, it returns null on timeout instead of throwing.
viewModelScope.launch {
    try {
        val resultWithTimeoutOrNull = withTimeoutOrNull(1000L) {
            exampleWithTimeoutOrNull()
        }
        if (resultWithTimeoutOrNull != null) {
            println("Success")
        } else {
            println("Error due to timeout")
        }
    } catch (exception: Exception) {
        println("Error")
    }
}
Database operations with Room and Coroutines When combining Room and Coroutines, start by checking whether the database is empty; if it is, proceed to insert the required values. Since retrieving values from the database can potentially throw an exception, the operation is wrapped in a try/catch block.
Currently, Room and Coroutines are frequently used with Flow to handle asynchronous operations in Android development.
viewModelScope.launch {
    val resultDatabaseRoom = databaseRoom.exac()
    if (resultDatabaseRoom.isEmpty()) {
        println("The database is empty")
        try {
            val examDataList = getValue()
            for (resultExam in examDataList) {
                database.insert(resultExam)
            }
            println("Success")
        } catch (exception: Exception) {
            println("Error")
        }
    } else {
        println("There are values in the database")
    }
}
Flow This is a basic Flow setup. In onStart, the Flow emits an initial value, and in onCompletion, a log message is generated to indicate that the process has finished.
sealed class UiState {
    data object Loading : UiState()
    data class Success(val stockList: List<Stock>) : UiState()
    data class Error(val message: String) : UiState()
}

val anythingAsLiveData: LiveData<UiState> = anythingDataSource
    .map { anyList -> UiState.Success(anyList) as UiState }
    .onStart { emit(UiState.Loading) }
    .onCompletion { Timber.tag("Flow").d("Flow has completed.") }
    .asLiveData()
SharedFlow/StateFlow SharedFlow and StateFlow are hot variants of Flow. A Flow is converted into a StateFlow using stateIn. The main difference from a plain (cold) Flow is that a plain Flow does not retain emitted values, whereas SharedFlow can. StateFlow always holds exactly one current value, which can be read directly at any time; SharedFlow has no mandatory current value, but broadcasts each emission to all active collectors and can optionally replay a configurable number of past values to late subscribers.
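To make the StateFlow/SharedFlow contrast concrete, here is a minimal, self-contained sketch. This is my own illustration rather than code from the course, and it assumes only the kotlinx.coroutines dependency; all names are illustrative.

```kotlin
import kotlinx.coroutines.flow.MutableSharedFlow
import kotlinx.coroutines.flow.MutableStateFlow
import kotlinx.coroutines.runBlocking

fun main() = runBlocking {
    // StateFlow: always holds exactly one current value and requires an initial one.
    val state = MutableStateFlow(0)
    state.value = 1
    println(state.value) // the latest value is always readable synchronously: 1

    // SharedFlow: no mandatory initial value; `replay` controls how many past
    // emissions a collector that subscribes late will still receive.
    val shared = MutableSharedFlow<Int>(replay = 1)
    shared.emit(1)
    shared.emit(2)
    println(shared.replayCache) // only the last emission is retained: [2]
}
```

In practice, StateFlow tends to be used for UI state that always has a current value, while SharedFlow suits one-shot events (navigation, snackbars) where replaying the latest state to every new collector would be undesirable.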
sealed class UiState {
    data object Loading : UiState()
    data class Success(val stockList: List<Stock>) : UiState()
    data class Error(val message: String) : UiState()
}

val anythingAsFlow: StateFlow<UiState> = anythingDataSource
    .map { anyList -> UiState.Success(anyList) as UiState }
    .onCompletion { Timber.tag("Flow").d("Flow has completed.") }
    .stateIn(
        scope = viewModelScope,
        initialValue = UiState.Loading,
        started = SharingStarted.WhileSubscribed(stopTimeoutMillis = 5000)
    )
Summary Although much of the content covers the basics, the course was conducted primarily in English, which made it take longer to go through. I believe that after gaining a better overall understanding of asynchronous processing, a second pass will deepen my comprehension. That said, on the second pass, studying English will probably take priority. Thank you for reading to the end.
Introduction Hello. I’m Shimamura from the Platform Group’s Operation Tool Management Team, where I work in platform engineering, focusing on tool development and operations. I'm Yamada, also part of the Platform Group’s Operation Tool Management Team, where I focus on developing in-house tools. At KINTO Technologies, we utilize Amazon ECS + Fargate as our application platform. For CI/CD, we use GitHub Actions. In AWS ECS’s Blue/Green deployment system, the "CODE_DEPLOY" option is primarily used for the DeploymentController, and we believe there are few real-world examples where "EXTERNAL" (third-party control) is implemented. At the CI/CD Conference 2023 hosted by CloudNative Days, we also encountered an example of migrating from ECS to Kubernetes specifically to enable Blue/Green deployments. ( Chiisaku Hajimeru Blue/Green Deployment (Blue/Green Deployment That Starts Small) .) However, we wondered whether it might be possible to perform Blue/Green deployments in ECS without the limitations of CodeDeploy's conditions. We also considered that offering multiple deployment methods could benefit the departments developing applications. With that in mind, we began preparations to explore these options. Indeed, despite CODE_DEPLOY being the more common setting and the limited documentation available on using EXTERNAL for this purpose, we successfully implemented a system that supports it for the application teams. We'll share this as a real-world example of implementing Blue/Green deployment with external pipeline tools on ECS (Fargate). Background Issues Relying solely on ECS rolling updates may not fully meet the requirements for future releases. It’s essential to offer a variety of deployment methods and deploy applications in a way that aligns with their specific characteristics. Solution method As a first step, we decided to introduce Blue/Green deployment on ECS.
Canary releases may present challenges in the future, but since we successfully implemented Blue/Green deployment in this form, we anticipate being able to adapt it to support configurations such as setting the traffic inflow rate and other parameters via the CLI. Design Checking with CODE_DEPLOY If you search for “ECS Blue/Green deployment,” you will find a wide variety of information. However, simply leaving it at that isn’t ideal, so we’d like to provide a summary of the key points and the overall setup. This is the configuration. You configure various settings in CodeDeploy, create a new task associated with the task definition, and adjust the traffic inflow rate according to the deployment settings. You can switch over all at once, test a portion initially, or shift traffic gradually, depending on your needs. Specifications we initially thought might be unattainable When we reviewed the environment and operation under CodeDeploy, certain aspects raised concerns for us. It could all come down to specific settings, so if you have any insights, please feel free to share. We plan to verify the operation by running a test system for a certain period, allowing for customer review and other checks. The system can be maintained for about a day, but the deployment will fail if the switchover button isn't pressed within that timeframe. We’d like the option to terminate the old application at a chosen time after the switchover. In CodeDeploy, a time limit can be configured, but it doesn’t allow for arbitrary timing. Reverting through the console appears to be a complex process. The process becomes cumbersome because, due to the permissions setup, you need to use SwitchRole to access it from the console. Overall configuration with EXTERNAL Components (name and overview):
Terraform: A product for coding various services, AWS among them (IaC). In-house design patterns and modules are created with Terraform.
GitHub Actions: The CI/CD tool included in GitHub.
At KINTO Technologies, we utilize GitHub Actions for tasks such as building and releasing applications. We use a pipeline in GitHub Actions to deploy new applications and transition away from the old ones.
ECS (Elastic Container Service): We use ECS as the runtime environment for our applications. For configuration, the DeploymentController can be set to ECS, CODE_DEPLOY, or EXTERNAL; this example specifically implements it with EXTERNAL.
DeploymentController: We view this as a kind of control plane for ECS (or at least, that’s how we see it internally).
TaskSet: A collection of tasks linked to an ECS service. You can create one via the CLI, but apparently not via the console. Using this enables you to run multiple task definition revisions in parallel for a single service. ( CLI reference .) Setting one up requires an ALB, a Target Group, and several other components, so there are quite a few configurations involved.
ALB ListenerRule: A rule for directing traffic to Target Groups within the ALB. In Blue/Green deployment, modifying this link toggles the traffic flow between the old and new applications.
Restrictions The DeploymentController in ECS can only be set during service creation, meaning it cannot be modified for existing services. When using EXTERNAL, the platform version isn’t fixed by the service; it’s specified when creating a TaskSet. The service launch type is fixed to EC2; however, if you specify Fargate when creating a TaskSet, the task will be started with Fargate. Implementation Terraform At KINTO Technologies, we use Terraform as our IaC tool. We've also turned this setup into a module, and here I'll outline the key points to be mindful of that arose during the module modifications. ListenerRule Using GitHub Actions, we modify the ListenerRule to update the TargetGroup, so we configure ignore_changes (in the resource's lifecycle block) to prevent unnecessary updates. ECS service For a service whose DeploymentController is EXTERNAL, the following three options cannot be configured on the service itself: NetworkConfiguration, LoadBalancer, and ServiceRegistries.
If you’re using Dynamic or similar settings, ensure that these options are not created. In this case, the service won’t be registered in CloudMap, so if you plan to integrate it with AppMesh or similar services, you’ll need to account for this. There’s no issue with using AppMesh for communication between ECS services, even if one of them is configured with a Blue/Green deployment setup. Since the Blue/Green deployment runs in parallel, if it were registered in CloudMap and allowed communication, it could result in unintended or erroneous access. Therefore, we believe this current setup is likely the correct behavior. IAM policy for CI/CD roles In addition to the ECS permissions, various other permissions are also required. A sample is as follows.
resource "aws_iam_policy" "cicd-bg-policy" {
  name = "cicd-bg_policy"
  path = "/"
  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [
      {
        Action   = ["iam:PassRole"]
        Effect   = "Allow"
        Resource = "arn:aws:iam::{ACCOUNT}:role/{ROLE_NAME}"
      },
      {
        Action   = ["ecs:DescribeServices"]
        Effect   = "Allow"
        Resource = "arn:aws:ecs:{REGION}:{ACCOUNT}:service/{ECS_CLUSTER_NAME}/{ECS_SERVICE_NAME}"
      },
      {
        Action   = ["ecs:CreateTaskSet", "ecs:DeleteTaskSet"]
        Effect   = "Allow"
        Resource = "*"
        Condition = {
          StringLike = {
            "ecs:service" = [
              "arn:aws:ecs:{REGION}:{ACCOUNT}:service/{ECS_CLUSTER_NAME}/{ECS_SERVICE_NAME}"
            ]
          }
        }
      },
      {
        Action   = ["ecs:RegisterTaskDefinition", "ecs:DescribeTaskDefinition"]
        Effect   = "Allow"
        Resource = "*"
      },
      {
        Action   = ["elasticloadbalancing:ModifyRule"]
        Effect   = "Allow"
        Resource = "arn:aws:elasticloadbalancing:{REGION}:{ACCOUNT}:listener-rule/app/{ALB_NAME}/*"
      },
      {
        Action = [
          "elasticloadbalancing:DescribeLoadBalancers",
          "elasticloadbalancing:DescribeListeners",
          "elasticloadbalancing:DescribeRules",
          "elasticloadbalancing:DescribeTargetGroups"
        ]
        Effect   = "Allow"
        Resource = "*"
      },
      {
        Action   = ["ec2:DescribeSubnets", "ec2:DescribeSecurityGroups"]
        Effect   = "Allow"
        Resource = "*"
      },
    ]
  })
}
Please
replace the ECS cluster name, ECS service name, and ALB name with the appropriate values. Ensure these align with the scope of the CI/CD roles and any applicable permissions. Permissions for CreateTaskSet and DeleteTaskSet are not restricted to specific resources; instead, the target service is pinned by the ecs:service condition. The DescribeLoadBalancers permissions, along with the ec2 DescribeSubnets and DescribeSecurityGroups permissions, are used within the workflows to look up status information. elasticloadbalancing:ModifyRule is, needless to say, necessary for rewriting the ListenerRule on release. The ListenerRule is scoped to the ALB name because listener-rule ARNs contain randomly assigned values. GitHub Actions At KINTO Technologies, we use GitHub Actions as our CI/CD tool. Our process involves developing standardized CI/CD workflows within the Platform Group and then supplying them to the app development teams. Workflow overview In the workflows for this project, we created a Blue/Green deployment system according to the steps below. In this article, we will only cover the deployment workflow. Key considerations and points of caution As the provider of these workflows to the app development teams, we paid close attention to the following key points: An implementation that minimizes parameter specification at runtime to reduce the risk of errors or misoperation. Since these workflows require manual execution, all parameters that can be retrieved via the CLI are gathered within the workflows themselves. This approach ensures that incorrect parameters aren’t specified at runtime. Simplified workflow setup. Implementation that uses secrets as little as possible. The AWS resource names are set through environment variables, with fixed values used for all except system-specific ones. This approach minimizes the need for configuration.
While registering all the ARNs of the AWS resources as secrets would make the in-workflow processing that obtains ARNs from resource names unnecessary and reduce the amount of code, we instead implemented a CLI-driven process that retrieves and uses ARNs from resource names, keeping the initial configuration workload to almost nothing. Workflow implementation Here, we would like to explain the main processing of each workflow using sample code. All the workflows basically follow the pattern: get the AWS credentials → get the required parameters via the CLI → run validation checks → execute. Creating the task set The runtime parameters for this workflow are the image tag in ECR (Elastic Container Registry) and the environment. Before creating the task set, validation checks ensure that the target group is available for testing and that the image tag passed as a runtime parameter exists in ECR. After that, the task definition is created from the image tag. Once the task definition has been created, the workflow obtains the parameters needed to create the task set (the subnets, security groups, and task definition), then runs the CLI to create it. jobs: ... ## Check the target group to be used check-available-targetGroup: ... ## Create the task definition from the ECR images deploy-task-definition: ... ## Create the task set create-taskset: runs-on: ubuntu-latest needs: deploy-task-definition steps: # Get the AWS Credentials - Set AWS Credentials  ... - Get the target group ... # Create the task set - name: Create TaskSet run: | # Get the task definition ARN taskDefinition=`aws ecs describe-task-definition\ --task-definition ${{ env.TASK_DEFINITION }}\ | jq -r '.taskDefinition.taskDefinitionArn'` echo $taskDefinition # Get the subnets subnetList=(`aws ec2 describe-subnets | jq -r '.Subnets[] | select(.Tags[]?.Value | startswith("${{ env.SUBNET_PREFIX }}")) | .SubnetId'`) if [ "$subnetList" == "" ]; then echo !!
Unable to get the subnets, so processing will be aborted. exit 1 fi # Get the security groups securityGroupArn1=`aws ec2 describe-security-groups | jq -r '.SecurityGroups[] | select(.Tags[]?.Value == "${{ env.SECURITY_GROUP_1 }}") | .GroupId'` if [ "$securityGroupArn1" == "" ]; then echo !! Unable to get the security groups, so processing will be aborted. exit 1 fi securityGroupArn2=`aws ec2 describe-security-groups | jq -r '.SecurityGroups[] | select(.Tags[]?.Value == "${{ env.SECURITY_GROUP_2 }}") | .GroupId'` if [ "$securityGroupArn2" == "" ]; then echo !! Unable to get the security groups, so processing will be aborted. exit 1 fi echo --------------------------------------------- echo Creating the task set aws ecs create-task-set\ --cluster ${{ env.CLUSTER_NAME }}\ --service ${{ env.SERVICE_NAME }}\ --task-definition ${taskDefinition}\ --launch-type FARGATE\ --network-configuration "awsvpcConfiguration={subnets=["${subnetList[0]}","${subnetList[1]}"],securityGroups=["${securityGroupArn1}","${securityGroupArn2}"]}"\ --scale value=100,unit=PERCENT\ --load-balancers targetGroupArn="${createTaskTarget}",containerName=application,containerPort=${{ env.PORT }} Switching listener rules The workflow for switching listener rules begins by retrieving and verifying the number of task sets currently running. If only the production environment’s task set is running (a single task set) and you switch the listener rules between the production and test environments, the task set serving production would be taken out of service. To prevent this, our implementation checks the number of running task sets; if there is one or fewer, the process halts without switching listener rules. After that, it switches the production and test listener rules. Since there is no CLI command for swapping two listener rules, we call it "switching," but strictly speaking, the workflow runs a CLI command that changes each listener rule (modify-rule).
Since each listener rule change is processed in parallel, we use a sleep command to stagger the processing. This ensures that both listener rules don’t end up linked to the test environment due to minor timing differences. env: RULE_PATTERN: host-header ## http-header / host-header / path-pattern / source-IP, etc. PROD_PARAM: domain.com TEST_PARAM: test.domain.com ... jobs: ## If there is one task set or less running, make it so that the host header cannot be changed check-taskSet-counts: runs-on: ubuntu-latest steps: ## Get the AWS Credentials - name: Set AWS Credentials ... # Validation - name: Check TaskSet Counts run: | taskSetCounts=(`aws ecs describe-services --cluster ${{ env.CLUSTER_NAME }}\ --service ${{ env.SERVICE_NAME }}\ --region ${{ env.AWS_REGION }}\ | jq -r '.services[].taskSets | length'`) if [ "$taskSetCounts" == "" ]; then echo !! Unable to get the number of running task sets, so processing will be aborted. exit 1 fi echo Number of running task sets: $taskSetCounts if [ $taskSetCounts -le 1 ]; then echo !! The number of running task sets is 1 or less, so processing will be aborted. exit 1 fi ## Switch between ALB listener rules (production, test) change-listener-rule-1: runs-on: ubuntu-latest needs: check-taskSet-counts steps: ## Get the AWS Credentials - name: Set AWS Credentials ...
- name: Change Listener Rules run: | # Get the ALB ARN from the ALB name albArn=`aws elbv2 describe-load-balancers --names ${{ env.ALB_NAME }} | jq -r .LoadBalancers[].LoadBalancerArn` # Get the listener ARN from the ALB ARN listenerArn=`aws elbv2 describe-listeners --load-balancer-arn ${albArn} | jq -r .Listeners[].ListenerArn` # Get the listener rule ARN from the listener ARN listenerRuleArnList=(`aws elbv2 describe-rules --listener-arn ${listenerArn} | jq -r '.Rules[] | select(.Priority != "default") | .RuleArn'`) pattern=`aws elbv2 describe-rules --listener-arn ${listenerArn}\ | jq -r --arg listener_rule ${listenerRuleArnList[0]} '.Rules[] | select(.RuleArn == $listener_rule) | .Conditions[].Values[]'` if [ "$pattern" == "" ]; then echo !! Unable to get the listener rule, so processing will be stopped. exit 1 fi echo --------------------------------------------- echo Current rule pattern: $pattern echo --------------------------------------------- if [ $pattern == "${{ env.TEST_PARAM }}" ]; then aws elbv2 modify-rule --rule-arn ${listenerRuleArnList[0]} --conditions Field="${{ env.RULE_PATTERN }}",Values="${{ env.PROD_PARAM }}" else sleep 5s aws elbv2 modify-rule --rule-arn ${listenerRuleArnList[0]} --conditions Field="${{ env.RULE_PATTERN }}",Values="${{ env.TEST_PARAM }}" fi echo --------------------------------------------- echo Rule pattern after change aws elbv2 describe-rules --listener-arn ${listenerArn}\ | jq -r --arg listener_rule ${listenerRuleArnList[0]} '.Rules[] | select(.RuleArn == $listener_rule) | .Conditions[].Values[]' ## Switch between ALB listener rules (production, test) change-listener-rule-2: ... The processing is the same as for change-listener-rule-1, and only the specification of listenerRuleArnList elements differs ... Deleting the task set In the task set deletion workflow, the only runtime parameters are the environments. 
If you specify the task set ID to be deleted as a parameter, the workflow only requires a single CLI command to delete that task set ID. This simplifies the process to a single line, aside from obtaining AWS credentials and other setup steps. However, if you accidentally specify a task set ID that is currently in production, there is a risk that the production task set could be deleted, leaving only the test environment active. Therefore, we implemented a solution where the runtime parameters are limited to the environments only. The workflow retrieves and deletes the task set for the test environment directly within the workflow implementation. env: TEST_PARAM: test.domain.com # Host header for testing ... jobs: ## Delete the task set delete-taskset: runs-on: ubuntu-latest steps: ## Get the AWS Credentials - name: Set AWS Credentials ... # Get the target group linked to the test host header - name: Get TargetGroup run: | # Get the ALB ARN from the ALB name albArn=`aws elbv2 describe-load-balancers --names ${{ env.ALB_NAME }} | jq -r .LoadBalancers[].LoadBalancerArn` # Get the listener ARN from the ALB ARN listenerArn=`aws elbv2 describe-listeners --load-balancer-arn ${albArn} | jq -r .Listeners[].ListenerArn` # Get the target group linked to the test rules from the listener’s ARN and the test host header testTargetGroup=`aws elbv2 describe-rules --listener-arn ${listenerArn}\ | jq -r '.Rules[] | select(.Conditions[].Values[] == "${{ env.TEST_PARAM }}") | .Actions[].TargetGroupArn'` echo "testTargetGroup=${testTargetGroup}" >> $GITHUB_ENV # Get the task set ID linked to the test host header’s target group by the listener rules - name: Get TaskSetId run: | taskId=`aws ecs describe-services\ --cluster ${{ env.CLUSTER_NAME }}\ --service ${{ env.SERVICE_NAME }}\ --region ${{ env.AWS_REGION }}\ | jq -r '.services[].taskSets[] | select(.loadBalancers[].targetGroupArn == "${{ env.testTargetGroup }}") | .id'` if [ "$taskId" == "" ]; then echo !! 
Unable to find the task set linked to the test host header’s target group, so processing will be aborted. exit 1 fi echo The task set ID to be deleted echo $taskId echo "taskId=${taskId}" >> $GITHUB_ENV # Delete the task set from the task set ID obtained - name: Delete TaskSet run: | aws ecs delete-task-set --cluster ${{ env.CLUSTER_NAME }} --service ${{ env.SERVICE_NAME }} --task-set ${{ env.taskId }} Next steps We plan to refine the ALB ListenerRule component and explore enabling canary releases, but first, we need user feedback. For now, we are rolling it out to the application side to gather insights and improvements. In our GitHub Actions workflows, we minimized the use of secrets as much as possible. However, they still require setting numerous environment variables, and we aim to reduce this dependency in the future. For instance, we could potentially configure it so that only system-specific values are set via environment variables, minimizing the need for additional variable settings. We are also looking into whether we can switch between listener rules safely and instantaneously. Impressions As mentioned earlier, there are likely very few real-world examples of Blue/Green deployment with ECS + EXTERNAL (using GitHub Actions). We reached this point by building a system from scratch, with no existing documentation to guide us. In hindsight, while implementing the GitHub Actions workflows wasn’t inherently difficult, we were able to come up with several effective ideas to create workflows that are both straightforward (with minimal setup) and safe to use. Looking ahead, we aim to enhance this system by having people use it and then refining it based on their feedback. Summary The Operation Tool Management Team oversees and develops tools used internally throughout the organization. We leverage tools and solutions created by other teams within the Platform Group.
Based on the company's requirements, we either develop new tools from scratch or migrate existing components as needed. If you’re interested in these activities or would like to learn more, please don’t hesitate to reach out to us.
A Kotlin Engineer’s Journey Building a Web Application with Flutter in Just One Month Hello. I am Ohsugi from the Woven Payment Solution Development Group. Our team generally engages in server-side programming with Kotlin/Ktor and is currently working at Woven by Toyota Inc. on the development of the payment system used in Toyota Woven City . Working in cooperation with our in-house business teams and partners to build Woven City, we have repeatedly conducted proofs of concept (PoC) to expand the features of the payment system. Recently, we ran the first PoC for the payment system, simulating its operation in actual retail stores. In this article, I would like to introduce how we decided to adopt Flutter to develop client apps as part of the PoC. Introduction To conduct the PoC for retail sales operations, we developed the following features to support store functions in addition to the payment system: Product management for store inventory Point-of-sale (POS) cash registers, including: Product scanning Shopping cart functionality Sales reports and payment management Inventory tracking In particular, to regularly update tens of thousands of product information items, report sales, and conduct month-end inventory checks, we needed more than just a payment API; we required a GUI application accessible to non-technical store staff. This is what prompted us, a team that normally focuses on server-side development, to suddenly take on the challenge of creating a client application. Selecting a Language and Framework When developing the client application, we narrowed down our choices to cross-platform frameworks that would allow for application development not only for the web but also for iOS/Android. Language / framework, and reasons for selection:
Dart / Flutter , Flutter on the web
- This is a trending technology that has been getting significant attention recently.
- It has also been adopted by the in-house Mobile App Development Team, so members across teams are very familiar with this language and framework.
TypeScript / Expo (React Native) , Expo for web
- In terms of web development, this choice would enable us to move forward with React, which is one of the most mature technologies out there.
- Our team members have experience with React, so ramp-up time would be minimal.
Kotlin / Compose Multiplatform , Compose for web
- With few existing adoption examples, we would have the opportunity to explore more innovative development approaches.
- There are no team members with direct development experience, but it should be straightforward for those familiar with Kotlin.
Technical validation To select a language and framework, we conducted a technical evaluation by creating a web app that combines state management and screen transitions, which are important elements of client app development. The app we created is very simple: pressing the + or - button increases or decreases a count on the screen on the left (Home Page), and pressing the "next" button navigates to the screen on the right (Detail Page), where the count is displayed. For each language/framework combination, we looked at the differences in the development experience it would offer in terms of how UI components are implemented, performance, libraries, documentation, and community support. How UI components are implemented First, we compared Flutter on the web, Expo for web, and Compose for web using the Detail Page code on the right of the image above as an example. Dart / Flutter on the web I find it very intuitive, as you can implement the UI using object-oriented components rather than the DOM. You can use virtually the same code for both mobile and web apps. Material Design is applied by default for styling, which has its pros and cons, but it is a real boon in situations where engineers need to handle design too.
When rendering with CanvasKit, it's possible to achieve a nearly identical UI appearance across platforms. class DetailPage extends StatelessWidget { const DetailPage({super.key}); @override Widget build(BuildContext context) { final args = ModalRoute.of(context)!.settings.arguments as DetailPageArguments; return Scaffold( appBar: AppBar( title: const Text("Flutter Demo at Detail Page"), ), body: Center( child: ConstrainedBox( constraints: const BoxConstraints(minWidth: 120), child: Center( child: Text( args.value.toString(), style: const TextStyle(fontSize: 72), ), ), ), ), ); } } TypeScript/Expo Like Flutter, the UI can be implemented with object-oriented components instead of the DOM, which feels very intuitive. On the downside, the framework provides only minimal components, requiring you to implement additional ones on your own. The same code can be used for both mobile and web with minimal differences. Styling is done with StyleSheet, whose syntax is similar to CSS; since styles are scoped to the app, it doesn't feel as hard to manage as global CSS. The sample app uses react-navigation to implement screen transitions.
const DetailPage: React.FC = () => { // from react-navigation const route = useRoute<RouteProp<RootStackParamList, 'Detail'>>(); return ( <View> <Header title={'Expo Demo at Detail Page'} /> <CenterLayout> <Counter value={route.params.value}/> </CenterLayout> </View> ); } const Header : React.FC<{title: String}> = (props) => { const {title} = props; return ( <View style={styles.header}> <Text style={styles.title}> {title} </Text> </View> ) } const CenterLayout: React.FC<{children: React.ReactNode}> = (props) => { const {children} = props; return ( <View style={styles.layout}> {children} </View> ) } const Counter: React.FC<{value: number}> = (props) => { const {value} = props; return ( <View style={styles.counterLayout}> <Text style={styles.counterLabel}>{value}</Text> </View> ) } const styles = StyleSheet.create({ header: { position: "absolute", top: 0, left: 0, width: '100%', backgroundColor: '#20232A', padding: '24px 0', }, title: { color: '#61dafb', textAlign: 'center', }, layout: { display: 'flex', flexDirection: 'row', justifyContent: 'center', alignItems: 'center', height: '100vh', }, counterLayout: { minWidth: 120, textAlign: 'center' }, counterLabel: { fontSize: 72, } }); Kotlin / Compose for web Instead of using the Compose UI used on mobile and desktop, we implement the UI using web-specific components that wrap around the HTML DOM. Code cannot be reused across mobile and web Styling needs to be implemented in CSS. For component implementation, you can either define the properties of each component from scratch or use pre-defined components as StyleSheet objects. To implement screen transitions, the sample app uses the routing-compose library for Compose Multiplatform, which supports both web and desktop. @Composable fun DetailPage(router: Router, params: Map<String, List<String>>?) 
{
    Div {
        components.Header(title = "Compose for web Demo at Detail Page")
        CenterLayout {
            params?.get("value")?.get(0)?.let { Counter(it.toInt()) }
        }
    }
}

@Composable
fun Header(title: String) {
    H1(attrs = {
        style {
            position(Position.Fixed)
            top(0.px)
            left(0.px)
            paddingTop(24.px)
            paddingBottom(24.px)
            backgroundColor(Color("#7F52FF"))
            color(Color("#E8F0FE"))
            textAlign("center")
            width(100.percent)
        }
    }) {
        Text(title)
    }
}

@Composable
fun CenterLayout(content: @Composable () -> Unit) {
    Div(attrs = {
        style {
            display(DisplayStyle.Flex)
            flexDirection(FlexDirection.Row)
            justifyContent(JustifyContent.Center)
            alignItems(AlignItems.Center)
            height(100.vh)
        }
    }) {
        content()
    }
}

@Composable
fun Counter(value: Int) {
    Span(attrs = {
        style {
            minWidth(120.px)
            textAlign("center")
            fontSize(24.px)
        }
    }) {
        Text(value.toString())
    }
}

Performance Next, we compared the build times and bundle sizes of the sample app for each language and framework. Each app was built with its default optimization options. The testing environment was a MacBook Pro 2021 with an M1 Pro CPU and 32 GB of memory.

Dart / Flutter on the web (Flutter v3.7.7, Dart v2.19.2): build time 14 s, bundle size 1.7 MB (CanvasKit) / 1.3 MB (Html)
TypeScript / Expo for web (TypeScript v4.9.4, Expo v48.0.11): build time 10 s, bundle size 500 KB
Kotlin / Compose for web (Kotlin v1.8.10): build time 9 s, bundle size 350 KB

As you can see, the bundle size for the sample app functionality with Flutter is about 10 times larger than that with React, which means that the initial rendering will probably take quite a long time. You can inspect the JS code generated by Flutter by adding --dump-info to the build options; this helped us see that the bundle mainly consists of the Dart and Flutter framework itself. Libraries, documentation, and community support Lastly, I have put together some information on the libraries, documentation, and community support for each language-framework combination.
Libraries, documentation, and community support by framework:

Dart / Flutter on the web: With Flutter packages, you can search for libraries available for Flutter. Libraries marked with the Flutter Favorite logo are officially recognized for their popularity and ease of use. The official documentation and videos are comprehensive, and the website also provides recommended libraries and design guidelines for state management and more.

TypeScript / Expo for web: The basic libraries are fairly extensive, and the de facto standard ones are easy to find if you search for them. The maintenance of each library relies to a large extent on the community, so you need to choose carefully. For basic implementations, there is a rich selection of official React documentation and Expo documentation. For effective design guidelines, including library design, referring to React discussions on the web seems a good approach.

Kotlin / Compose for web: You can use a wide variety of JVM libraries. However, Android and Compose UI-related libraries are often not available in Compose for web. There is not much documentation, so you need to either search the GitHub repositories or search the community's Slack channel for information.

The Adoption of Flutter Based on the technical evaluation described above, we chose Flutter as the technology stack for client app development in the PoC. The reasons are as follows: Even team members unfamiliar with client app development can easily work with Flutter, as it has comprehensive documentation and reference materials, which should minimize the impact on our primary server-side development work. The framework is actively developed and well maintained, so it is easy to upgrade versions and introduce libraries.
Given the characteristics of the PoC, the app will run in a stable network environment, so performance limitations are not a significant concern. Additionally, though it may sound like an afterthought, being able to run JavaScript from Dart was very reassuring when we encountered issues that couldn't be solved with Flutter alone. Our system uses Keycloak as the authentication platform, and since Keycloak's official repository does not currently provide a Flutter client library, we handle authentication by running a JS library from Dart. Conclusion In this article, I introduced the reasons behind our decision to adopt Flutter for the development of the client app used in the PoC. Currently, we are developing the client app in parallel with our server-side development. We would like to update this blog with more information as we deepen our technical knowledge in the future.
Introduction I am Kinoshita, a prototyping engineer at KINTO Technologies. To kick off an upcoming series on Agile here on the blog, I'll start by sharing a quick update on renewing my Scrum Inc. Registered Scrum Master qualification. If you are interested in how to become a Registered Scrum Master and what the seminar contains, please read this previous article I wrote on this topic. When I wrote the previous article, the certification was called Licensed Scrum Master (LSM), but on July 29, 2022, it was renamed to Registered Scrum Master (RSM). It seems the license name was automatically updated, as my certification had also changed to RSM (Registered Scrum Master). I've added this name change to the previous article as well. The Renewal A year after obtaining the license, you receive an email notifying you that its expiration date is approaching and that it will become invalid unless renewed. However, I did not notice the email myself, so I was unaware that the renewal deadline was coming up until a colleague who attended the previous seminar with me mentioned it. You have 60 days to decide whether to renew, and if you choose to proceed, you need to go to the members' site, pay the renewal fee, and unlock the renewal exam. I had initially thought the renewal would cost $50 per year, but it turns out there are also options for five-year and lifetime plans. This time, it seems there was a discount, reducing the five-year plan from $250 to $199 and the lifetime plan from $500 to $399. At KINTO Technologies, subsidies cover seminars but not certifications, so I would be covering this renewal cost myself. Even with the discount, the yen being as weak as 138 to the dollar (at the time of payment) brought the costs to approximately 27,500 yen for the $199 plan and around 55,000 yen for the $399 one. The prices didn't seem so daunting in dollars, but once I converted them to yen, I felt a sharp pain in both my wallet and my heart.
Why I Renewed Given that I had very few opportunities to apply what I had learned about Scrum and would have to pay out of my own pocket, I honestly wasn't eager to renew. On top of the difficulty of getting stakeholders to understand Agile and overcome their resistance to it, and the sheer size of our teams and groups, not everyone wants to do Agile and Scrum either. It felt like a pretty tall order from the start. As a result, I strongly felt that it would be important to involve people around me in order to find like-minded individuals and help create a more supportive atmosphere for it, even if only a small step at a time. In the end, fostering a "Let's do it!" mindset would be far more important than whether or not I had a license. What changed my attitude (and also inspired me to write this article) was the expansion of my network within the company, which provided me with numerous opportunities to discuss Scrum Master topics with people I had never met before. One of these was a chance to talk about it with other teams in a roundtable discussion held thanks to avid Agile enthusiast Kin-chan really hitting the ground running after joining the company. Listening to them made me regret having kept it all pent up inside, and shifted me a little back toward wanting to figure out how to solve my own similar concerns. It dawned on me that having the license would continue to expand my circle, which might, in turn, increase the opportunities to put it to use. These thoughts made me more inclined to renew. A major factor was that, almost eerily at the last minute, a colleague at the company reached out after reading my previous article. They knew how much the renewal would cost but strongly encouraged me to go ahead and renew anyway. Prompted by this encouragement, and figuring that if I was going to do it I might as well go all in, I opted for the lifetime plan.
Still torn between wanting and not wanting to renew, my head was full of the pain in my wallet even during the exam, but despite that, I managed to pass without a hitch. To ease the sting on my heart and wallet, I took one of the souvenir candies someone had left out in the office. As I ate it, I treated it like a 55,000-yen indulgence, savoring every bite. The colleague who had attended the seminar with me last time had also renewed, but apparently opted for the annual renewal. They mentioned that when they took the exam, there was quite a bit they had forgotten, so they were glad they had chosen the yearly renewal, as it gave them a chance to review everything again. About the Renewal Exam You get not just one but two chances to take the renewal exam, just like with the exam after the seminar last time. After answering all the questions, you are shown your score and which ones you got right and wrong, so you can see where you made mistakes. The content and difficulty of the questions felt on the same level as the ones after the seminar last year. If you pass, you get an email telling you so, and the expiry date displayed beneath the official mark in the bottom right of the certificate changes to “Valid Until Lifetime.” So, I am now a Registered Scrum Master for life and will never need to take the renewal exam again. I no longer have to worry about whether to renew it every time it expires. Conclusion, and a Plug A year had passed since I last took the seminar, and it was time to decide whether to renew my license. Personally, I didn't feel that the license had proven its value over the past year. However, I started to see its worth for a reason I hadn't originally considered: it opened up opportunities to connect with people. With that in mind, I decided to renew. At KINTO Technologies, many other teams, projects, and products are embracing the challenge of Agile and Scrum, in addition to those involved in the roundtable discussion.
Our avid Agile enthusiast Kin-chan will cover these topics in the upcoming series on Agile I mentioned earlier. Kin-chan has a broad view of Agile, unbound by any specific development framework, and has already taken passionate steps to champion it within the company multiple times. Personally, I'm looking forward to the wide range of perspectives the series will offer, so it is definitely something to keep an eye out for.
Introduction The Platform Engineering team at KINTO Technologies was never entirely satisfied with our current logging solution. Then we spotted an opportunity: by leveraging a new AWS service, we could make our log platform easier to use and cut costs at the same time. Two birds with one stone! Of course, we cannot simply tear down the entire existing system and replace it with a new service. That would be like swapping out a car's engine while it is still driving! We needed to investigate how to use and configure the new service so that it would fit our needs. While evaluating OpenSearch Serverless as our new log platform, we needed a solution for alerting. We currently use the Alerting feature of our OpenSearch cluster, but that feature is not available on serverless instances. Fortunately, as of AWS Managed Grafana version 9.4, Grafana's OpenSearch plugin can use OpenSearch Serverless instances as data sources (see the Grafana OpenSearch plugin page), so we could leverage Grafana for alerting! However, we still had to figure out how to configure the two services so that they would work well together. By the research stage, we had already created an OpenSearch Serverless instance and tested log ingestion from all the sources we wanted to use. The remaining task was to set up a test Grafana instance in a sandbox and configure the serverless instance as a data source. At the time of writing, the AWS documentation did not describe this procedure in detail. As engineers, we do not get a step-by-step guide for every piece of work, so some trial and error was needed to find out what actually works. In addition, to narrow down the required permissions, we asked AWS Support for help, and they escalated our request to both the internal Amazon Managed Grafana team and the OpenSearch team, since the documentation was not yet in place. That is why I decided to write this article and share what we learned. Before continuing, a quick self-introduction: I am Martin, a platform engineer at KINTO Technologies. I joined the team last year and have been working on AWS projects on and off ever since. I also learned a great deal while working on this project, and I am very happy to share that experience with you!
The biggest lesson I took from the project is that AWS Support is a fantastic resource, and you should not hesitate to ask them for help when you are stuck. Setting up the environment In this article, we will set everything up through the AWS console. Of course, you can create the same configuration with your favorite Infrastructure as Code tool. This article assumes that you are familiar with the AWS console and already have an OpenSearch Serverless instance up and running. Note that the configuration shown here prioritizes simplicity, so I strongly recommend reviewing and adjusting it to match your security requirements. Setting up the IAM role First, we need to create an IAM role for the Grafana instance to use. If you plan to use other AWS services from your Grafana workspace, you may prefer to select the "Service managed" option when creating the workspace; you can then either update the role AWS creates or specify the ARN of a custom role when configuring the Grafana data source. The trust policy needed when creating the IAM role is as follows.

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Service": "grafana.amazonaws.com"
      },
      "Action": "sts:AssumeRole"
    }
  ]
}

You can get the same trust policy by selecting Amazon Grafana as the "Trusted entity type" for AWS services. (We will select it in the use case section.) The permissions policy required for Grafana to access OpenSearch Serverless is below. Many thanks to the AWS Support team, who escalated our request to the Grafana and OpenSearch teams so that we could get the minimum required permissions.

{
  "Statement": [
    {
      "Action": [
        "es:ESHttpGet",
        "es:DescribeElasticsearchDomains",
        "es:ListDomainNames"
      ],
      "Effect": "Allow",
      "Resource": "*"
    },
    {
      "Action": "es:ESHttpPost",
      "Effect": "Allow",
      "Resource": [
        "arn:aws:es:*:*:domain/*/_msearch*",
        "arn:aws:es:*:*:domain/*/_opendistro/_ppl"
      ]
    },
    {
      "Action": [
        "aoss:ListCollections",
        "aoss:BatchGetCollection",
        "aoss:APIAccessAll"
      ],
      "Effect": "Allow",
      "Resource": [
        "arn:aws:aoss:<YOUR_REGION>:<YOUR_ACCOUNT>:collection/*"
      ]
    }
  ],
  "Version": "2012-10-17"
}

OpenSearch access policy On the OpenSearch side, we need to add a data access policy for the newly created IAM role. Even if the IAM role has the permissions required to access OpenSearch, a data access policy must be created before the role can access the data in your collection. See the AWS documentation for details. In the OpenSearch service page menu, select "Data access policies" under the Serverless section and click the "Create access policy" button. Give the access policy a name and description, and choose JSON as the policy definition method. The following policy is taken from the Grafana OpenSearch plugin documentation.

[
  {
    Rules = [
      {
        ResourceType = "index",
        Resource = [
          "index/<NAME_OF_YOUR_OPENSEARCH_INSTANCE>/*"
        ],
        Permission = [
          "aoss:DescribeIndex",
          "aoss:ReadDocument"
        ]
      },
      {
        ResourceType = "collection",
        Resource = [
          "collection/<NAME_OF_YOUR_OPENSEARCH_INSTANCE>"
        ],
        Permission = [
          "aoss:DescribeCollectionItems"
        ]
      }
    ],
    Principal = [
      <GRAFANA_IAM_ARN>
    ]
    Description = "Read permissions for Grafana"
  }
]

Update the name of your OpenSearch Serverless deployment and the ARN of the IAM role created earlier. A little networking setup Before moving on to creating the Grafana instance, we will create a few network resources. First, let's create two subnets in the same VPC as the OpenSearch Serverless deployment. Each subnet must be placed in a different Availability Zone. Once the subnets are created, update each subnet's route table and add a new route from 0.0.0.0/0 to an internet gateway. Next, create a security group that allows inbound HTTPS traffic from the VPC and all outbound traffic to 0.0.0.0/0. With these settings in place, we are ready to create the Grafana instance! Creating the Grafana instance Search for the Amazon Managed Grafana service in the console search bar. On the service home page, create a Grafana workspace using the handy button the AWS engineers have placed there. In the first step of the creation page, set a name and description for the Grafana workspace. Make sure to set the version to at least 9.4. The latest version is 10.4, so that is what I will use. On the next page, choose your preferred authentication method; I will go with AWS IAM Identity Center. For Permission type, select Customer managed and specify the ARN of the IAM role created earlier. After creating the Grafana workspace, I ran into a strange issue where a different IAM role was used instead of the one I had selected, so I had to update the workspace to use the correct role. This may have been a bug, or it may have been a configuration mistake on my part, but for the purposes of this article, let's say I definitely chose the right role and it was a bug. OK? Great! Let's move on! In the Outbound VPC connection section, select the same VPC where the OpenSearch Serverless instance is deployed. For Mapping and Security Groups, select the subnets and security group created earlier. In the Workspace configuration options section, be sure to select Turn plugin management on. For this tutorial, select Open access in the Network access control section. Click the Next button and review your settings. Once the workspace has been created, configure your authentication method. Since I chose AWS IAM Identity Center, I add my own user and make it an admin. You should now be able to connect!
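As a side note, the console steps above can also be expressed as code. The sketch below shows the input I would expect to pass to CreateWorkspaceCommand from the @aws-sdk/client-grafana package; the field names reflect my reading of the SDK reference, and every ID and ARN here is a placeholder, so treat this as an illustration of the settings we chose rather than a tested deployment script.

```typescript
// Workspace settings mirroring the console walkthrough above.
// NOTE: all IDs/ARNs are placeholders, and the field names should be verified
// against the current @aws-sdk/client-grafana CreateWorkspaceCommand reference.
const workspaceInput = {
  workspaceName: "opensearch-alerting",
  grafanaVersion: "10.4",               // needs to be at least 9.4 for the plugin
  accountAccessType: "CURRENT_ACCOUNT",
  authenticationProviders: ["AWS_SSO"], // AWS IAM Identity Center
  permissionType: "CUSTOMER_MANAGED",   // bring our own IAM role
  workspaceRoleArn: "arn:aws:iam::123456789012:role/grafana-opensearch-role",
  vpcConfiguration: {
    // Same VPC as the OpenSearch Serverless collection, subnets in two AZs.
    subnetIds: ["subnet-aaaa1111", "subnet-bbbb2222"],
    securityGroupIds: ["sg-cccc3333"],
  },
};

// Sending it would look roughly like:
//   await new GrafanaClient({}).send(new CreateWorkspaceCommand(workspaceInput));
console.log(workspaceInput.permissionType); // CUSTOMER_MANAGED
```

Defining the input as a plain object like this also makes it easy to reuse in whichever Infrastructure as Code tool you prefer.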
Connecting Grafana to OpenSearch Serverless Before adding the OpenSearch Serverless data source, we need to install the OpenSearch plugin in the Grafana workspace. To do this, follow these steps: From the left menu, select Administration, then Plugins and Data, then Plugins. On the plugins page, select "All" instead of "Installed" in the field at the top of the page. Search for the OpenSearch plugin and install it. Once the installation is complete, an "Add new data source" button appears at the top right of the OpenSearch plugin page. Click it. Next, configure the data source settings for connecting to the OpenSearch Serverless instance. HTTP section: enter the URL of your OpenSearch Serverless instance in the URL field. Auth section: turn on SigV4 auth and select the region where your OpenSearch Serverless instance is located. OpenSearch Details section: turn on Serverless and set the index you want to use. Logs section: set the names of the message field and the level field. Finally, click "Save & test". You should see a message confirming that the connection succeeded. You can now create alerts and dashboards using this data source! Conclusion I hope this article has been helpful and that you can now set up your own Grafana instance with OpenSearch Serverless as a data source. For us at KINTO Technologies, using Grafana for alerting looks like a great option for our new logging solution. With this setup, we can build a robust, efficient, and cost-effective logging and alerting solution that matches our requirements. Personally, I found writing alert queries in Grafana simpler and more flexible than in OpenSearch. By the way, the Platform Group at KINTO Technologies is looking for new teammates! We are always on the lookout for talented engineers. If you are interested in joining the team, or would like to know more about our work and workplace, please feel free to reach out! We also have a web page listing our open positions, so please take a look here.
Introduction Hello, everyone. My name is Nakaguchi and I work in the Mobile App Development Group. How did you enjoy iOSDC Japan 2024? This year, it took place in August, and the excitement was higher than ever! It had the energy of a real festival!! This article is for: those who attended iOSDC, iOS engineers, and anyone who loves attending conferences. I hope you'll enjoy reading it. Up until last year, KINTO Technologies' participation in iOSDC was mostly voluntary. Employees who were interested would attend, and afterwards, they would share their learnings via lightning talks in an internal study session or write a tech blog post. But this year, KINTO Technologies approached iOSDC 2024 with a whole new attitude!! This year, we became an official sponsor, submitted several proposals (and one was accepted, amazing!!🎉), and held a special event to reflect on iOSDC. We went in with a packed agenda!! To wrap it all up, I'm writing this blog post! Sponsor's Story This year, for the first time, KINTO Technologies became an official sponsor of iOSDC🙌!!! Our Tech Blog Team has evolved into a Technical PR Group, and we are putting even more effort into external events! In addition to iOSDC, which we participated in this time, we are also sponsoring DroidKaigi 2024 and Developers Summit KANSAI. We're actively participating and increasingly showing up at large conferences! At iOSDC, our Mobile App Development Group took the lead, receiving great support from our Technical Public Relations Group, the Creative Team, and other departments across the company. For more details, our team members have summarized it in a separate article and presented it at the iOSDC retrospective event, which I'll discuss later. Do check it out! [Tech Blog] Our First iOSDC Sponsor Diary Here we mainly introduce the novelties and other deliverables we created for the event! Please take a look!
https://blog.kinto-technologies.com/posts/2024-08-21-iOSDC2024-novelties/ [Tech Blog] KINTO Technologies is a Gold Sponsor of iOSDC Japan 2024 & The Challenge Token is here 🚙 This article includes interviews with our employees. Please also take a look! https://blog.kinto-technologies.com/posts/sponsored-iosdc-japan-2024/ [Presentation Slide] I would like to share our journey to becoming an iOSDC sponsor Here, we introduce how we proceeded with our sponsorship in chronological order! If you're interested in becoming a sponsor at a conference, this post offers many valuable insights, so please check it out!!! https://speakerdeck.com/ktchiroyah/iosdcchu-chu-zhan-matenisitashi-wogong-you-sitai The proposal story This year, for the first time, we held a company-wide proposal writing workshop🙌!!! Team members who were interested in presenting gathered, used these slides as references, discussed how to write a presentation and what content to include, and came up with the following proposals! https://fortee.jp/iosdc-japan-2024/proposal/7fd624c8-06ec-4dc4-960a-da37f74cf90f https://fortee.jp/iosdc-japan-2024/proposal/a82414cd-54d7-4abb-aa20-e35feb717489 https://fortee.jp/iosdc-japan-2024/proposal/e9e13b6d-0b74-4437-8ec0-ba6598b70ad7 https://fortee.jp/iosdc-japan-2024/proposal/ab0eeedf-0d4f-47a6-8df8-bd792b4d70ca And the following proposals were selected!! Wow! It's truly amazing!!🎉 https://fortee.jp/iosdc-japan-2024/proposal/25af110e-61d0-4dc8-aba5-3e2e7d192868 https://fortee.jp/iosdc-japan-2024/proposal/c3901357-0782-4fb5-89b8-cb48c473f066 After hearing examples from other companies, I realized that they had meetings to review their proposals, and their number of submissions was on a whole different level. We can't afford to fall behind! Next year, I want to work even harder! Held a retrospective event for iOSDC Large-scale events like this often come with after-events, and last year, several companies hosted iOSDC retrospective events.
And this year, we hosted our own event as well🙌!!! I've written a pretty enthusiastic blog post about why I decided to hold the event, what happened leading up to it, what it was like on the day, and more, so please do take a look!!! https://blog.kinto-technologies.com/posts/2024-09-12-after-iosdc/ Below, I've summarized the sessions that the members who participated in iOSDC attended. KINTO Technologies session viewing rankings We had 15 participants (including 4 vendors), and we've compiled a ranking of the sessions they watched. This gives you a good idea of the kind of technologies our company is currently interested in!! Tied for 2nd place (6 participants): Learning about typed throws in Swift 6 and the overall picture of error handling in Swift https://fortee.jp/iosdc-japan-2024/proposal/c48577a8-33f1-4169-96a0-9866adc8db8e The speaker explained not only what typed throws are but also compared them with untyped throws, which made it very easy to understand. At first glance, typed throws seemed promising, but I was glad they addressed the official statement that it shouldn't be used too lightly. It was also insightful to hear the presenter Koher's perspective. Tied for 2nd place (6 participants): Roundtable Discussion "Strict Concurrency and Swift 6 Open a New Era: How to embrace the new era?" https://fortee.jp/iosdc-japan-2024/proposal/5e7b95a8-9a2e-47d5-87a7-545c46c38b25 We were also researching Strict Concurrency for Swift 6, and this session was extremely informative. I'd like to move forward with our plans based on what was presented there. Additionally, the roundtable discussion format was refreshing, and it was wonderful to see everyone supporting each other. I hope to see more presentations like this in the future.
Tied for 2nd place (6 participants): Shared with Swift Package practices that accelerate development https://fortee.jp/iosdc-japan-2024/proposal/52d755e6-2ba3-4474-82eb-46d845b6772c Since we are developing multiple apps, the concept of a shared Swift Package is very appealing. However, there's a dilemma because each app has different requirements, making it difficult to find common parts to share. On the other hand, I learned a lot about the steps to create a shared Swift Package, such as team structure and operation methods. Tied for 1st place (7 participants): Rookies LT Tournament https://fortee.jp/iosdc-japan-2024/proposal/95d397a6-f81d-4809-a062-048a447279b3 One of our team members gave a presentation, so we rushed over to cheer and support!! Cheering with penlights was a lot of fun!! The content of the talks was also very interesting, and some of our team members even said, "I want to try it next year!" Tied for 1st place (7 participants): The Magic of App Clips: A New Era in iOS Design Development https://fortee.jp/iosdc-japan-2024/proposal/66f33ab0-0d73-479a-855b-058e41e1379b At our company, we haven't yet introduced App Clips in any of our apps, so many team members were eager to try them out. However, some challenges, such as how to distribute App Clip code, are expected to arise. Below are the other sessions with high view counts. Watched by 4 people: A thorough explanation of various "ViewControllers" in iOS/iPadOS and implementation examples Unraveling what defines an iOS app LT Tournament (Second Half) Increased cross-platform adoption. Is iOS development with Swift fading away? An introduction to software development for tackling complexity Watched by 5 people: Understanding the data standard for integrating My Number Card on iPhone Unleashing the future of ridesharing with GraphQL and Schema-first development In addition, the average number of sessions watched per person this time was 11.25!!!
Bonus This year, we also set up a sponsor booth, and we were curious to know which booths left the biggest impression on attendees, so we conducted a survey! We received responses from 9 people, and here are the results. (Only booths with more than one vote are included.) We tallied the most memorable booths. When you look at the results, you can see that the votes were quite spread out. (I believe the 6 votes for our booth were out of kindness!) When I think about it, I realize how difficult it is to create a booth that appeals to everyone. In the midst of this, DeNA collecting 4 votes is truly impressive. Conclusion As mentioned at the beginning, the entire company was very enthusiastic about this year's iOSDC! Personally, I'm very satisfied with our sponsorship, proposals, and the retrospective event. However, there are still many areas for improvement, and I hope to level up even more and participate in iOSDC next year!! Additionally, just like every year, the sessions were extremely informative, and I'm really glad I participated.
Introduction Hello! My name is Ren.M from the Project Promotion Group at KINTO Technologies. My main role is to develop the front-end of KINTO ONE (Used Car) . In this article, rather than talking about technical stuff, I would like to tell you about our company’s in-house activities! Target audience of this article People who are interested in in-house club activities People who feel there is not enough communication between employees What are our in-house club activities? Our company fosters a culture of club activities, with a variety of in-house clubs, including a futsal club, a golf club, and more! Each club has its own dedicated channel on Slack, making it easy for anyone to join or participate at their discretion. In fact, some members enjoy being part of multiple clubs! I’m a member of the basketball club, which rents a gymnasium near the office for practice sessions lasting about three hours each evening. While gymnasium availability is determined by a lottery system, we consistently play every month without fail. To ensure smooth operations, we have volunteers who take on various responsibilities, including: Booking the gymnasium each month Handling payment for its usage Managing club expenses These tasks are shared among members on a volunteer basis. Once our reservation is confirmed, we announce it on Slack and invite participants! It depends on the day, but we usually have around ten participants joining us! Activity Scene What I have gained through club activities A refreshing break Our company is home to many engineers, and pretty much all of them do desk jobs. Also, because I sometimes work from home, no matter how hard I try, I am prone to not getting enough exercise. Participating in club activities provides me with a vital opportunity to exercise, refreshing both my mind and body. I find myself getting really passionate during practice, but I always make a conscious effort to avoid injury while having fun! 
More interaction with employees from other departments I believe this is one of the greatest strengths of our club activities. With employees from various departments participating, it provides a unique opportunity to connect with colleagues you don't typically work with. In meetings, having prior relationships formed through club activities can facilitate smoother collaboration, as participants are not just strangers meeting for the first time. Additionally, I hope these activities help new employees feel more at home within our company. Conclusion I hope this gives you a glimpse into what our club activities look like. I think in-house club activities are a positive culture that allows employees to refresh themselves while deepening their friendships! If you join our company, I encourage you to engage with colleagues through these activities! There are also many other articles about them on the Tech Blog, so please take a look if you're interested!
Self-introduction Hello. I am Sora Watanabe, a member of the SRE Team in the Platform Group at KINTO Technologies Corporation (hereafter, KINTO Technologies). We contribute to improving the reliability of our company's services by leveraging our experience in application development, infrastructure setup, and CI/CD for web services. Introduction No matter how great a service is, realistically, there is no way to guarantee that problems will never occur. In today's service delivery landscape, it's essential to proactively set targets for an acceptable number of issues and, in some cases, share these expectations with users to build consensus. Specifically, you define the levels of service using Service Level Indicators (SLIs), then set target values for them using Service Level Objectives (SLOs). Then, you obtain the users' agreement to these target values through Service Level Agreements (SLAs). Having set your SLOs, the next step is to monitor for target violations. An alert needs to be triggered if a violation occurs. However, the rules for triggering alerts are prone to becoming complex and difficult to manage. To solve this problem, in this article, I will introduce a way to streamline creating and managing alert rules by using an alert rule generator called Sloth. Background As I discussed in a previous article, at KINTO Technologies, we are using the stack "Prometheus + Grafana + X-Ray" to obtain telemetry data and improve the observability of our request-response-type web services. https://blog.kinto-technologies.com/posts/2022-12-09-AWS_Prometehus_Grafana_o11y/ Thanks to this, we are now successfully storing a wide variety of metrics in Prometheus, and for Spring Boot application metrics in particular, doing so without having to add any special instrumentation to the application code. The metrics stored include the success/failure status and response time data on a per-request basis.
This has enabled us to express SLIs for availability and latency using PromQL. Issues Typically, after establishing SLIs and SLOs for the web service's Critical User Journey (CUJ), you then monitor the error budget usage for the deployed service. When doing so, you need to detect potential SLO/SLA violations at an appropriate time. To do that, you need to set up alerts to enable the developers to detect anomalies. According to The Site Reliability Workbook, multiwindow, multi-burn-rate alerts are the most effective and recommended approach for detecting SLO violations. A key benefit is that they offer excellent control over precision, recall, detection time, and reset time, which made us eager to adopt them. I will briefly explain what "window" and "burn rate" mean: Window This refers to the measurement period: the timeframe that defines when measurement begins and ends. Since SLOs are expressed as percentages, the service level resets to 100% at the start of each new measurement period when the previous one concludes. In general, the larger the window size, the harder it is for alerts to fire and to resolve. Burn rate This refers to the rate at which the error budget is used up. Triggering an alert only after the error budget is fully consumed would be too late; ideally, an alert should be triggered once a certain portion of it has been used. The burn rate is defined relative to a reference value of 1, which represents the pace at which the error budget would reach exactly 0 at the end of the measurement window if consumed steadily. The actual consumption rate is then measured against this reference to see how many times faster the error budget is being depleted, and that multiple is the burn rate. You configure the system to trigger an alert when the burn rate surpasses a predetermined threshold.
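To make these definitions concrete, here is a small sketch of the arithmetic. The 99.5% SLO matches the specification used later in this article; the 14.4 threshold and the 1-hour/5-minute window pair are the page-alert values recommended in The Site Reliability Workbook for a 30-day window, not something specific to our setup.

```typescript
// Error budget: with a 99.5% SLO, 0.5% of requests may fail.
const slo = 0.995;
const errorBudget = 1 - slo; // 0.005

// Burn rate = observed error rate / error budget.
// A burn rate of 1 means the budget runs out exactly at the end of the window.
function burnRate(errorRate: number): number {
  return errorRate / errorBudget;
}

// A 2% error rate consumes the budget 4x faster than sustainable.
console.log(burnRate(0.02)); // ≈ 4

// Multiwindow, multi-burn-rate page condition (SRE Workbook values): page only
// when BOTH a long and a short window exceed the threshold, so that a spike
// that has already ended (short window recovered) does not keep paging.
function shouldPage(errorRate1h: number, errorRate5m: number): boolean {
  const threshold = 14.4; // 2% of a 30-day budget burned within 1 hour
  return burnRate(errorRate1h) > threshold && burnRate(errorRate5m) > threshold;
}

console.log(shouldPage(0.08, 0.09));  // true: both windows burn ~16-18x
console.log(shouldPage(0.08, 0.001)); // false: the incident is already over
```

In practice, each extra window/burn-rate pair (page, ticket, and so on) becomes one more Prometheus alert rule per SLO, which is exactly the growth in rules discussed next.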
For more information on multiwindow, multi-burn-rate alerts, see Chapter 5, “Alerting on SLOs,” in The Site Reliability Workbook. https://www.oreilly.co.jp/books/9784873119137/ The English version has been published on the web: https://sre.google/workbook/alerting-on-slos/#6-multiwindow-multi-burn-rate-alerts

To use this approach, you need to set up alert rules that specify multiple windows and burn rates—that is, multiple different parameters—for a single SLI/SLO definition. As a result, the number of alert rules grows to the point where they become difficult to manage.

What we will do

In this article, we will use an open source tool called Sloth to solve this issue. https://sloth.dev/ With Sloth, you can define SLI/SLO specifications with simple descriptions, and Sloth generates the Prometheus recording rules and alert rule definition files—files that would otherwise be complex and error-prone to write by hand. At KINTO Technologies, we adopt a configuration like the one in the figure below. Sloth generates multiwindow, multi-burn-rate alert rules by default, so in this article I will show you how to set them up using Sloth.

Generating alert rules

:::message
Sloth can also accept SLI/SLO specification files that adhere to the OpenSLO standard. However, these do not currently appear to support generating Prometheus alert rules, so we have opted to use Sloth’s own SLI/SLO specification format instead.
:::

The following simple SLI/SLO specification is expressed in a YAML file based on the Sloth standards.

| Category | SLI | SLO |
| --- | --- | --- |
| Availability | The percentage of successful requests measured by the application over a 30-day period. Consider any HTTP status outside the ranges 500–599 and 429 as successful. Consolidate and measure all request paths except actuator. | 99.5% |

```yaml
version: "prometheus/v1"
service: "KINTO"
labels:
  owner: "KINTO Technologies Corporation"
  repo: "slo-maintenance"
  tier: "2"
slos:
  # We allow failing (5xx and 429) 5 requests every 1000 requests (99.5%).
  - name: "kinto-requests-availability"
    objective: 99.5
    description: "Common SLO based on availability for HTTP request responses."
    sli:
      events:
        error_query: sum(rate(http_server_requests_seconds_count{application="kinto",status=~"(5..|429)",uri!~".*actuator.*"}[{{.window}}]))
        total_query: sum(rate(http_server_requests_seconds_count{application="kinto",uri!~".*actuator.*"}[{{.window}}]))
    alerting:
      name: KINTOHighErrorRate
      labels:
        category: "availability"
      annotations:
        # Overwrite default Sloth SLO alert summary on ticket and page alerts.
        summary: "High error rate on 'KINTO SERVICE' requests responses"
      page_alert:
        labels:
          severity: "critical"
      ticket_alert:
        labels:
          severity: "warning"
```

http_server_requests_seconds_count is a metric exposed when using Spring Boot. With this file saved in the ./source/ directory, run the following commands:

```shell
docker pull ghcr.io/slok/sloth
docker run -v /$(pwd):/home ghcr.io/slok/sloth generate -i /home/source/slo_spec.yml > slo_generated_rules.yml
```

Running the above command generates the following file in the current directory. The generated file can be uploaded to Prometheus as is.

:::details slo_generate_rules.yml
```yaml
---
# Code generated by Sloth (a9d9dc42fb66372fb1bd2c69ca354da4ace51b65): https://github.com/slok/sloth.
# DO NOT EDIT.

groups:
- name: sloth-slo-sli-recordings-KINTO-kinto-requests-availability
  rules:
  - record: slo:sli_error:ratio_rate5m
    expr: |
      (sum(rate(http_server_requests_seconds_count{application="kinto",status=~"(5..|429)",uri!~".*actuator.*"}[5m])))
      /
      (sum(rate(http_server_requests_seconds_count{application="kinto",uri!~".*actuator.*"}[5m])))
    labels:
      owner: KINTO Technologies Corporation
      repo: slo-maintenance
      sloth_id: KINTO-kinto-requests-availability
      sloth_service: KINTO
      sloth_slo: kinto-requests-availability
      sloth_window: 5m
      tier: "2"
  - record: slo:sli_error:ratio_rate30m
    expr: |
      (sum(rate(http_server_requests_seconds_count{application="kinto",status=~"(5..|429)",uri!~".*actuator.*"}[30m])))
      /
      (sum(rate(http_server_requests_seconds_count{application="kinto",uri!~".*actuator.*"}[30m])))
    labels:
      owner: KINTO Technologies Corporation
      repo: slo-maintenance
      sloth_id: KINTO-kinto-requests-availability
      sloth_service: KINTO
      sloth_slo: kinto-requests-availability
      sloth_window: 30m
      tier: "2"
  - record: slo:sli_error:ratio_rate1h
    expr: |
      (sum(rate(http_server_requests_seconds_count{application="kinto",status=~"(5..|429)",uri!~".*actuator.*"}[1h])))
      /
      (sum(rate(http_server_requests_seconds_count{application="kinto",uri!~".*actuator.*"}[1h])))
    labels:
      owner: KINTO Technologies Corporation
      repo: slo-maintenance
      sloth_id: KINTO-kinto-requests-availability
      sloth_service: KINTO
      sloth_slo: kinto-requests-availability
      sloth_window: 1h
      tier: "2"
  - record: slo:sli_error:ratio_rate2h
    expr: |
      (sum(rate(http_server_requests_seconds_count{application="kinto",status=~"(5..|429)",uri!~".*actuator.*"}[2h])))
      /
      (sum(rate(http_server_requests_seconds_count{application="kinto",uri!~".*actuator.*"}[2h])))
    labels:
      owner: KINTO Technologies Corporation
      repo: slo-maintenance
      sloth_id: KINTO-kinto-requests-availability
      sloth_service: KINTO
      sloth_slo: kinto-requests-availability
      sloth_window: 2h
      tier: "2"
  - record: slo:sli_error:ratio_rate6h
    expr: |
      (sum(rate(http_server_requests_seconds_count{application="kinto",status=~"(5..|429)",uri!~".*actuator.*"}[6h])))
      /
      (sum(rate(http_server_requests_seconds_count{application="kinto",uri!~".*actuator.*"}[6h])))
    labels:
      owner: KINTO Technologies Corporation
      repo: slo-maintenance
      sloth_id: KINTO-kinto-requests-availability
      sloth_service: KINTO
      sloth_slo: kinto-requests-availability
      sloth_window: 6h
      tier: "2"
  - record: slo:sli_error:ratio_rate1d
    expr: |
      (sum(rate(http_server_requests_seconds_count{application="kinto",status=~"(5..|429)",uri!~".*actuator.*"}[1d])))
      /
      (sum(rate(http_server_requests_seconds_count{application="kinto",uri!~".*actuator.*"}[1d])))
    labels:
      owner: KINTO Technologies Corporation
      repo: slo-maintenance
      sloth_id: KINTO-kinto-requests-availability
      sloth_service: KINTO
      sloth_slo: kinto-requests-availability
      sloth_window: 1d
      tier: "2"
  - record: slo:sli_error:ratio_rate3d
    expr: |
      (sum(rate(http_server_requests_seconds_count{application="kinto",status=~"(5..|429)",uri!~".*actuator.*"}[3d])))
      /
      (sum(rate(http_server_requests_seconds_count{application="kinto",uri!~".*actuator.*"}[3d])))
    labels:
      owner: KINTO Technologies Corporation
      repo: slo-maintenance
      sloth_id: KINTO-kinto-requests-availability
      sloth_service: KINTO
      sloth_slo: kinto-requests-availability
      sloth_window: 3d
      tier: "2"
  - record: slo:sli_error:ratio_rate30d
    expr: |
      sum_over_time(slo:sli_error:ratio_rate5m{sloth_id="KINTO-kinto-requests-availability", sloth_service="KINTO", sloth_slo="kinto-requests-availability"}[30d])
      /
      ignoring (sloth_window)
      count_over_time(slo:sli_error:ratio_rate5m{sloth_id="KINTO-kinto-requests-availability", sloth_service="KINTO", sloth_slo="kinto-requests-availability"}[30d])
    labels:
      owner: KINTO Technologies Corporation
      repo: slo-maintenance
      sloth_id: KINTO-kinto-requests-availability
      sloth_service: KINTO
      sloth_slo: kinto-requests-availability
      sloth_window: 30d
      tier: "2"
- name: sloth-slo-meta-recordings-KINTO-kinto-requests-availability
  rules:
  - record: slo:objective:ratio
    expr: vector(0.995)
    labels:
      owner: KINTO Technologies Corporation
      repo: slo-maintenance
      sloth_id: KINTO-kinto-requests-availability
      sloth_service: KINTO
      sloth_slo: kinto-requests-availability
      tier: "2"
  - record: slo:error_budget:ratio
    expr: vector(1-0.995)
    labels:
      owner: KINTO Technologies Corporation
      repo: slo-maintenance
      sloth_id: KINTO-kinto-requests-availability
      sloth_service: KINTO
      sloth_slo: kinto-requests-availability
      tier: "2"
  - record: slo:time_period:days
    expr: vector(30)
    labels:
      owner: KINTO Technologies Corporation
      repo: slo-maintenance
      sloth_id: KINTO-kinto-requests-availability
      sloth_service: KINTO
      sloth_slo: kinto-requests-availability
      tier: "2"
  - record: slo:current_burn_rate:ratio
    expr: |
      slo:sli_error:ratio_rate5m{sloth_id="KINTO-kinto-requests-availability", sloth_service="KINTO", sloth_slo="kinto-requests-availability"}
      / on(sloth_id, sloth_slo, sloth_service) group_left
      slo:error_budget:ratio{sloth_id="KINTO-kinto-requests-availability", sloth_service="KINTO", sloth_slo="kinto-requests-availability"}
    labels:
      owner: KINTO Technologies Corporation
      repo: slo-maintenance
      sloth_id: KINTO-kinto-requests-availability
      sloth_service: KINTO
      sloth_slo: kinto-requests-availability
      tier: "2"
  - record: slo:period_burn_rate:ratio
    expr: |
      slo:sli_error:ratio_rate30d{sloth_id="KINTO-kinto-requests-availability", sloth_service="KINTO", sloth_slo="kinto-requests-availability"}
      / on(sloth_id, sloth_slo, sloth_service) group_left
      slo:error_budget:ratio{sloth_id="KINTO-kinto-requests-availability", sloth_service="KINTO", sloth_slo="kinto-requests-availability"}
    labels:
      owner: KINTO Technologies Corporation
      repo: slo-maintenance
      sloth_id: KINTO-kinto-requests-availability
      sloth_service: KINTO
      sloth_slo: kinto-requests-availability
      tier: "2"
  - record: slo:period_error_budget_remaining:ratio
    expr: 1 - slo:period_burn_rate:ratio{sloth_id="KINTO-kinto-requests-availability", sloth_service="KINTO", sloth_slo="kinto-requests-availability"}
    labels:
      owner: KINTO Technologies Corporation
      repo: slo-maintenance
      sloth_id: KINTO-kinto-requests-availability
      sloth_service: KINTO
      sloth_slo: kinto-requests-availability
      tier: "2"
  - record: sloth_slo_info
    expr: vector(1)
    labels:
      owner: KINTO Technologies Corporation
      repo: slo-maintenance
      sloth_id: KINTO-kinto-requests-availability
      sloth_mode: cli-gen-prom
      sloth_objective: "99.5"
      sloth_service: KINTO
      sloth_slo: kinto-requests-availability
      sloth_spec: prometheus/v1
      sloth_version: a9d9dc42fb66372fb1bd2c69ca354da4ace51b65
      tier: "2"
- name: sloth-slo-alerts-KINTO-kinto-requests-availability
  rules:
  - alert: KINTOHighErrorRate
    expr: |
      (
          max(slo:sli_error:ratio_rate5m{sloth_id="KINTO-kinto-requests-availability", sloth_service="KINTO", sloth_slo="kinto-requests-availability"} > (14.4 * 0.005)) without (sloth_window)
          and
          max(slo:sli_error:ratio_rate1h{sloth_id="KINTO-kinto-requests-availability", sloth_service="KINTO", sloth_slo="kinto-requests-availability"} > (14.4 * 0.005)) without (sloth_window)
      )
      or
      (
          max(slo:sli_error:ratio_rate30m{sloth_id="KINTO-kinto-requests-availability", sloth_service="KINTO", sloth_slo="kinto-requests-availability"} > (6 * 0.005)) without (sloth_window)
          and
          max(slo:sli_error:ratio_rate6h{sloth_id="KINTO-kinto-requests-availability", sloth_service="KINTO", sloth_slo="kinto-requests-availability"} > (6 * 0.005)) without (sloth_window)
      )
    labels:
      category: availability
      severity: critical
      sloth_severity: page
    annotations:
      summary: High error rate on 'KINTO SERVICE' requests responses
      title: (page) {{$labels.sloth_service}} {{$labels.sloth_slo}} SLO error budget burn rate is too fast.
  - alert: KINTOHighErrorRate
    expr: |
      (
          max(slo:sli_error:ratio_rate2h{sloth_id="KINTO-kinto-requests-availability", sloth_service="KINTO", sloth_slo="kinto-requests-availability"} > (3 * 0.005)) without (sloth_window)
          and
          max(slo:sli_error:ratio_rate1d{sloth_id="KINTO-kinto-requests-availability", sloth_service="KINTO", sloth_slo="kinto-requests-availability"} > (3 * 0.005)) without (sloth_window)
      )
      or
      (
          max(slo:sli_error:ratio_rate6h{sloth_id="KINTO-kinto-requests-availability", sloth_service="KINTO", sloth_slo="kinto-requests-availability"} > (1 * 0.005)) without (sloth_window)
          and
          max(slo:sli_error:ratio_rate3d{sloth_id="KINTO-kinto-requests-availability", sloth_service="KINTO", sloth_slo="kinto-requests-availability"} > (1 * 0.005)) without (sloth_window)
      )
    labels:
      category: availability
      severity: warning
      sloth_severity: ticket
    annotations:
      summary: High error rate on 'KINTO SERVICE' requests responses
      title: (ticket) {{$labels.sloth_service}} {{$labels.sloth_slo}} SLO error budget burn rate is too fast.
```
:::

This time we generated a simple example, but in practice you will likely define multiple, more complex SLI/SLO specifications. Without Sloth, you would need to manage code as lengthy as this generated output directly. Sloth significantly reduces the hassle involved in this process.

Configuration procedure

At KINTO Technologies, we use Amazon Managed Service for Prometheus, which lets you upload the generated files via the AWS Management Console. For more information on how to use Amazon Managed Service for Prometheus, please refer to the official documentation: https://docs.aws.amazon.com/ja_jp/prometheus/latest/userguide/AMP-rules-upload.html Alternatively, you can run the AWS CLI from a workflow. Here is an example using GitHub Actions.
```yaml
name: SLO set up
on:
  workflow_dispatch:
jobs:
  setup-slo:
    name: Set up SLOs
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - name: Set AWS Credentials to EnvParam(Common)
        uses: aws-actions/configure-aws-credentials@v2
        with:
          aws-access-key-id: ${{ AWS access key to be used }}
          aws-secret-access-key: ${{ AWS secret access key to be used }}
          aws-region: ${{ AWS region to be used }}
      ## Generate a configuration file from the definition file
      - name: download and setup generator binary
        run: |
          ## Please check the latest release situation as appropriate.
          wget https://github.com/slok/sloth/releases/download/vX.XX.X/sloth-linux-amd64
          chmod +x sloth-linux-amd64
          ./sloth-linux-amd64 validate -i ./services/kinto/source/slo_spec.yml
          ./sloth-linux-amd64 generate -i ./services/kinto/source/slo_spec.yml -o ./services/kinto/configuration.yml
      ## Upload the configuration file to Prometheus
      - name: upload configuration file to AMP
        run: |
          base64 ./services/kinto/configuration.yml > ./services/kinto/configuration_base_64.yml
          aws amp create-rule-groups-namespace \
            --data file://./services/kinto/configuration_base_64.yml \
            --name slo-rules \
            --workspace-id ${{ ID of the AMP workspace to be used }} \
            --region ${{ AWS region to be used }}
```

Visual representation

Once the rule file has been uploaded to Prometheus, the next step is to represent the data visually. We use Grafana. Grafana Labs provides a dashboard template for Sloth, allowing you to visually represent the generated rules simply by importing it. https://sloth.dev/introduction/dashboards/

Procedure for configuring alerts

Multiwindow, multi-burn-rate alerts are sent from Prometheus. Create an Alertmanager configuration file and upload it to Prometheus. https://docs.aws.amazon.com/ja_jp/prometheus/latest/userguide/AMP-alertmanager-config.html

:::message
With Amazon Managed Service for Prometheus, only notifications to Amazon SNS are currently supported. (We are hoping this will improve in the future!)
Consequently, we create SNS topics in advance, then specify the topics’ ARNs in the configuration file.
:::

At KINTO Technologies, we create a configuration file similar to the following to separate the routing of Critical and Warning alerts. The SNS attributes sent include the alert type information.

```yaml
alertmanager_config: |
  # The root route on which each incoming alert enters.
  route:
    # A default receiver
    receiver: warning_alert
    routes:
      - receiver: critical_alert
        matchers:
          - severity="critical"
      - receiver: warning_alert
        matchers:
          - severity="warning"
  # With Amazon Managed Service for Prometheus,
  # the only receiver currently supported is Amazon Simple Notification Service (Amazon SNS).
  # If you have other types of receivers listed in the configuration, it will be rejected.
  # Expect future revisions. https://docs.aws.amazon.com/ja_jp/prometheus/latest/userguide/AMP-alertmanager-config.html
  receivers:
    - name: critical_alert
      sns_configs:
        - topic_arn: arn:aws:sns:{AWS region}:{AWS account}:prometheus-alertmanager
          sigv4:
            region: {AWS region}
          attributes:
            severity: critical
            slack_api_url: '<your slack api url>'
            slack_channel: '#<your channel name>'
    - name: warning_alert
      sns_configs:
        - topic_arn: arn:aws:sns:{AWS region}:{AWS account}:prometheus-alertmanager
          sigv4:
            region: {AWS region}
          attributes:
            severity: warning
            slack_api_url: '<your slack api url>'
            slack_channel: '#<your channel name>'
```

Also, we subscribe to the SNS topics from AWS Lambda. Lambda uses the attributes from the triggered notification to dynamically route the alerts to the appropriate Slack channels. In practice, we customize this further, for example by calling the PagerDuty API when a Critical alert is triggered.
```python
#
# this script is based on https://aws.amazon.com/jp/premiumsupport/knowledge-center/sns-lambda-webhooks-chime-slack-teams/
#
import urllib3
import json

http = urllib3.PoolManager()


def lambda_handler(event, context):
    print({"severity": event["Records"][0]["Sns"]["MessageAttributes"]["severity"]["Value"]})
    url = event["Records"][0]["Sns"]["MessageAttributes"]["slack_api_url"]["Value"]
    msg = {
        "channel": event["Records"][0]["Sns"]["MessageAttributes"]["slack_channel"]["Value"],
        "username": "PROMETHEUS_ALERTMANAGER",
        "text": event["Records"][0]["Sns"]["Message"],
        "icon_emoji": "",
    }
    encoded_msg = json.dumps(msg).encode("utf-8")
    resp = http.request("POST", url, body=encoded_msg)
    print(
        {
            "message": event["Records"][0]["Sns"]["Message"],
            "status_code": resp.status,
            "response": resp.data,
        }
    )
```

Neat approaches we came up with

SLOs as Code

With Sloth, you can encode SLI/SLO specifications in the YAML file format. Since this is code, it can be version-managed using tools like Git. In addition, you can use hosting tools such as GitHub to make it easier to review. As long as the SLI/SLO specifications are compatible with Prometheus (expressible using PromQL), they can be applied not only to applications but also to metrics monitoring for load balancers and external monitoring services, so it is fair to say that Sloth has a wide scope of application. In the KINTO Technologies SRE Team, we consolidate all the YAML-format SLI/SLO specifications into a single GitHub repository. The SRE team provides a template in the repository; the development teams define SLI/SLO specifications based on it, commit them, and create a pull request, which the SRE team then reviews. This procedure makes it possible to understand the SLI/SLO specifications and reflect them in the monitoring smoothly. This approach helps reduce management costs and allows SLOs for any product to be easily referenced across KINTO Technologies’ entire development organization.
The service level of a dependency has a significant impact on the service level of the service that depends on it. Since KINTO Technologies' services rely on each other, sharing service levels across organizational boundaries helps maintain the service levels of individual services more effectively.

Latency SLI

With "slow" being the new "down," we need to monitor response times in addition to tracking 5xx errors. We will represent the following simple SLI/SLO specification in a YAML file that follows the Sloth standards.

| Category | SLI | SLO |
| --- | --- | --- |
| Latency | Among the successful requests measured by the application, consolidate and measure all request paths except actuator. Among 30 days’ worth of requests, the percentage of those that return a response in less than 3,000 milliseconds. | 99% |

```yaml
version: "prometheus/v1"
service: "KINTO"
labels:
  owner: "KINTO Technologies Corporation"
  repo: "slo-maintenance"
  tier: "2"
slos:
  ...
  # We allow failing (less than 3000ms) and (5xx and 429) 990 request every 1000 requests (99%).
  - name: "kinto-requests-latency-99percent-3000ms"
    objective: 99
    description: "Common SLO based on latency for HTTP request responses."
    sli:
      raw:
        # Get the average satisfaction ratio and rest 1 (max good) to get the error ratio.
        error_ratio_query: |
          1 - (
            sum(rate(http_server_requests_seconds_bucket{le="3",application="kinto",status!~"(5..|429)",uri!~".*actuator.*"}[{{.window}}]))
            /
            sum(rate(http_server_requests_seconds_count{application="kinto",status!~"(5..|429)",uri!~".*actuator.*"}[{{.window}}]))
          )
    alerting:
      name: KINTOHighErrorRate
      labels:
        category: "latency"
      annotations:
        summary: "High error rate on 'kinto service' requests responses"
      page_alert:
        labels:
          severity: "critical"
      ticket_alert:
        labels:
          severity: "warning"
```

To get the data in histogram form, add the following settings to application.yml:

```yaml
management:
  ...
  metrics:
    tags:
      application: ${spring.application.name}
    distribution:
      percentiles-histogram:
        http:
          server:
            requests: true
      slo:
        http:
          server:
            requests: 100ms, 500ms, 3000ms
```

Adding the settings below management.metrics.distribution configures the application to expose the metrics as histogram-type percentiles-histogram data rather than summary-type percentiles. The reason is that percentiles aggregate response times for a specific percentile only on a per-task basis and only for the last minute, meaning they cannot be aggregated across multiple tasks or over an extended period, like 30 days. Percentiles-histogram, on the other hand, stores the number of requests whose response times fall within each threshold, so it can be aggregated over an arbitrary range and across multiple tasks using PromQL. This allows us to define the latency SLI as the percentage of total requests that meet the specified criteria.

Discussion

Recommendation for a settable SLO: at least 94%

The Site Reliability Workbook provides recommended window and burn rate thresholds for detecting error budget consumption. https://sre.google/workbook/alerting-on-slos/#6-multiwindow-multi-burn-rate-alerts By default, Sloth supports several of the burn rates given in The Site Reliability Workbook, and the maximum burn rate threshold at which alerts can fire is 14.4. Suppose, for example, that the SLO is 93%; the error budget is then 7%. Calculating the error rate corresponding to a burn rate of 14.4 gives 14.4 * 7 = 100.8(%). Since the error rate is "error requests divided by all requests," it cannot exceed 100%. This means that with an SLO of 93%, there is zero probability that an alert reporting a burn rate above 14.4 will ever fire. Consequently, we recommend setting an SLO of at least 94%.

Conclusion

In my previous articles, I’ve shared the initiatives we’re working on within the KINTO Technologies SRE Team. What did you think?
While the organization as a whole isn’t required to manage service levels with extreme rigor, we’re pleased to have the flexibility to easily test useful alerts using the techniques described here. The Platform Group is actively looking for new team members to join us. If you are interested and would like to hear more, please feel free to contact us!
Introduction

Hello. I appreciate you taking the time to read this! My name is Nakamoto, and I work on developing the frontend for KINTO FACTORY ("FACTORY" in this article), a service that enables users to upgrade their current vehicles. In this article, I would like to introduce Strapi, an open-source tool we implemented to create and manage the content for the newly launched FACTORY Magazine.

What is Strapi?

Strapi is a self-hosted, headless CMS that, unlike SaaS-based CMS options, allows users to set up and manage their own servers, databases, and environments. (Strapi also offers Strapi Cloud, which provides a managed cloud environment.) At KINTO, we’ve worked with several SaaS CMS platforms to publish columns and articles. However, we encountered limitations in balancing operational efficiency with costs, along with limited flexibility to add new features or customize existing ones. This led us to explore OSS CMS tools with the aim of setting up a self-hosted solution. WordPress is likely the most widely recognized OSS CMS, but while researching other tools that have recently been gaining popularity, we came across Strapi. When evaluating open-source tools, we prioritized the following:

Usability: A user-friendly management UI, accessible to both developers and content managers
Community Support: Backed by a large community and extensive documentation
Plugin Variety: A diverse range of plugins for easily extending functionality
Scalability: Built on Node.js for high performance and scalability

If certain functions were missing or didn’t align with our needs, we could easily modify or create plugins using our JavaScript expertise. The low implementation cost was also a key factor in our decision.

Architecture and Deployment Mechanism

Strapi, like FACTORY's e-commerce site, is hosted on AWS and consists of a simple architecture using ECS and Aurora.
Strapi, as a CMS, operates independently from FACTORY's web application, primarily supporting internal teams like business units in writing and publishing articles. When publishing an article, it triggers a build of the web application, during which article information is retrieved from Strapi's API and directly embedded into the page. This setup means that users don’t directly interact with Strapi, creating a closed network for the CMS environment and eliminating unnecessary external access. Customization Case Study Let’s take a look at some of the customizations we implemented during the setup process. Creating new plugins In FACTORY magazine, there is a page titled User's Voice that features interviews with customers who have purchased and installed items from FACTORY. For these articles, it’s essential to link the car models or item names associated with each installation. Users enter details like "car model name (e.g., RAV4)" or "item name (e.g., Headlamp Design Upgrade)" in the standard input field. Vehicle Information Entry However, allowing free text entry can lead to inconsistencies in naming. To make it easier to search articles by car model or item—similar to a blog—it's more effective to link item IDs and other data stored within FACTORY to the articles. We addressed this by creating dropdown lists for these input fields, utilizing the BFF (Backend for Frontend) services already in place for the e-commerce site. Vehicle Selection Item Selection With this approach, the custom plugins enable precise linking of item and car model information. We also customized the images for an intuitive display, simplifying selection for the author. Leveraging the existing BFF from the e-commerce site highlights another advantage of a self-hosted CMS. Unlike SaaS solutions, this setup offers greater flexibility for customizations while minimizing security risks, as mentioned earlier. 
:::message
An article on creating a custom API has already been published, so I encourage you to check that out as well! Implementing a custom API in Strapi
:::

Customizing existing plugins

As another example of customization, we introduced this existing plugin, tagsinput, to meet the requirement of searching for similar articles by linking tags to them. However, this plugin saves the entered tags in the database as an associative array, like this: [{ name: tag1 }, { name: tag2 }]. This made the search logic complicated when creating an API to search by tags. So, to simplify the search, I made slight customizations to the plugin, changing it to store the entered tags as an array of strings: [tag1, tag2].

https://github.com/canopas/strapi-plugin-tagsinput/blob/1.0.6/admin/src/components/Input/index.js#L29-L36

```diff
@@ -26,8 +26,7 @@
   const { formatMessage } = useIntl();
   const [tags, setTags] = useState(() => {
     try {
-      const values = JSON.parse(value);
-      return values.map((value) => value.name);
+      return JSON.parse(value) || [];
     } catch (e) {
       return [];
     }
```

https://github.com/canopas/strapi-plugin-tagsinput/blob/1.0.6/admin/src/components/Input/index.js#L64-L70

```diff
@@ -38,7 +37,7 @@
   onChange({
     target: {
       name,
-      value: JSON.stringify(tags.map((tag) => ({ name: tag }))),
+      value: JSON.stringify(tags),
       type: attribute.type,
     },
   });
```

This approach makes it easy to tweak existing plugins, tailoring them to better suit our specific needs. One of the key customizations we've implemented is adding video tags to CKEditor, the rich text editor Strapi uses for article postings. Since this requires a more in-depth explanation, we’ll cover it in a separate article.
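The effect of the storage-format change above can be illustrated with a small sketch (illustrative code, not the plugin's actual search implementation; function names are mine):

```python
# Hedged illustration: why a plain string array is simpler to search than an
# associative array of {"name": ...} objects, mirroring the two storage
# formats the tagsinput plugin used before and after the customization.
import json

stored_before = json.dumps([{"name": "tag1"}, {"name": "tag2"}])  # old format
stored_after = json.dumps(["tag1", "tag2"])                       # new format

def has_tag_before(raw: str, tag: str) -> bool:
    # Old format: every lookup must first unwrap the {"name": ...} objects.
    return tag in [entry["name"] for entry in json.loads(raw)]

def has_tag_after(raw: str, tag: str) -> bool:
    # New format: a plain membership test on the decoded array is enough.
    return tag in json.loads(raw)

print(has_tag_before(stored_before, "tag1"))  # True
print(has_tag_after(stored_after, "tag1"))    # True
```

With the new format, a tag filter can also be pushed down to the database as a simple string match instead of a nested-object query.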
We’re already receiving feature requests, which we plan to address by taking full advantage of Strapi’s customizability. Although we’re in the early stages of operations, I’m excited to gain experience and explore Strapi’s potential, looking beyond article publishing to find innovative uses. I also plan to share FACTORY's approach with other KINTO services using SaaS CMS tools, aiming to foster broader adoption and development across the company.
Introduction

Hello, I'm Ryo, a developer of the ID Platform in the Global Development Group. I participated in the OpenID Summit at Shibuya Stream Hall on January 19th, 2024, so I am writing this article to share my impressions and the interesting points I found. Due to the COVID-19 pandemic, the summit was held for the first time in four years, since OpenID Summit Tokyo 2020. It was very exciting, as many people interested in OpenID gathered at the venue on the day of the event. The topic this time was what kind of changes digital identity has undergone in the four years surrounding the COVID-19 pandemic, and how digital identity is likely to develop around the world in the future.

Program Flow

Impressions

Compared to other, more broadly applicable technology fields, OpenID is not a very well-known topic, and I think there are many people who have never heard of it. However, when I arrived at the summit venue, there were many people from many companies who were interested in OpenID, which surprised me a little. Some people were attending for the first time, so in the morning we were introduced to the history of OpenID's development so far, the future development and outlook for OpenID in the areas of digital identity and electronic money, and details of the material-translation and human resource development activities being carried out by the OpenID Foundation Japan Working Group. From the content presented in the morning, I strongly felt that the direction of OpenID development is shifting from authentication and authorization toward identity verification (digital identity) and electronic payments management. As for the topics presented in the afternoon, each company explained the problems they encountered while implementing and operating OpenID, along with their solutions and countermeasures.
However, the afternoon program was held at two different venues, and since I cannot make copies of myself like Naruto, unfortunately I could not attend both at the same time.

Impressive presentations

Consideration of countermeasures against spoofing attacks when using OpenID Connect

Presenter: Junki Yuasa (Nara Institute of Science and Technology, Laboratory for Cyber Resilience). The case presented here is quite rare, but I was surprised that a second-year master's student had delved so deeply into OpenID operation experiments. Although there are several authentication modes for OpenID, security may be low depending on the usage scenario, so I think that in future development we should be careful about risky parts in specific cases like this one.

Mercari App Access Token

Presenter: Nguyen Da (Software Engineer, ID Platform Team, Mercari, Inc.). The Mercari app is well known in Japan as a very popular online second-hand marketplace. The presenter explained how Mercari ID faced some difficulties with its old method of operation, and how the team worked to make the service easier to use in both the mobile app and the browser. From this talk, I learned that although much has been achieved by utilizing browser cookies, Chrome has a special specification that caps a cookie's validity period at 400 days. Since launching our own ID platform, we have gone through many challenges and efforts regarding UI and UX, but this 400-day cookie validity limit was new to me.

About JWT

You may have heard the name JWT (JSON Web Token), but if you are not familiar with authentication and authorization, you may not have had a chance to learn about the role of JWT or the interrelationship between JWK, JWS, and JWE, which are often mentioned together. So let me give a brief explanation in advance: JWT is a standard for ensuring the reliability of information exchanged over a network.
Among them, JWS and JWE are concrete realizations of the JWT standard. A JWS (JSON Web Signature) in compact form is divided by "." into three parts, as shown in the figure below: Header (signing algorithm and metadata), Payload (the actual information), and Signature (a guarantee against tampering). A JWS is only base64url-encoded, so once decoded, all of the information in the Payload is visible. A JWK (JSON Web Key) is a JSON representation of the cryptographic key referenced from the JWT Header; the issuer uses the corresponding key to sign a hash of the Payload contents and produce the Signature. A JWE (JSON Web Encryption) is a JWT that additionally protects the confidentiality of the content, on top of the integrity that JWS provides. It is divided by "." into five parts, the second of which is the encrypted content-encryption key; only the holder of the matching decryption key can read the contents of the Payload. Source SD-JWT At this OpenID Summit, Italy's track record of implementing and operating electronic money was introduced. There, they explained a concept that was new to me, called SD-JWT. Since this was the first time I had heard of it, I looked it up myself after the summit ended. From here on, we finally get into the main subject of this article: I would like to briefly explain what I found out about SD-JWT. Selective Disclosure JWT (SD-JWT) is, as the name suggests, a JWT in which particular information is disclosed only to selected parties. First, some background on why SD-JWT was designed when JWS and JWE already exist. Currently there are two kinds of Payload disclosure: Full disclosure: anyone can base64url-decode a JWS and see everything in the payload. Full confidentiality: with JWE, the contents of the payload cannot be seen by anyone other than the holder of the decryption key. However, there was no solution for cases where you want to disclose only some of the information. That is why SD-JWT was born. 
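To make the three-part structure concrete, here is a minimal Python sketch that splits a compact JWS on "." and base64url-decodes its Header and Payload. The token below is a hypothetical example with a dummy signature; a real verifier must also check the Signature against the signing key.

```python
import base64
import json

def b64url_decode(seg: str) -> bytes:
    # Compact JWS segments are base64url without padding; restore it first
    return base64.urlsafe_b64decode(seg + "=" * (-len(seg) % 4))

# A hypothetical compact JWS: header.payload.signature
jws = (
    "eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9"
    ".eyJzdWIiOiIxMjM0NTY3ODkwIn0"
    ".dummy-signature"
)

header_b64, payload_b64, signature_b64 = jws.split(".")
header = json.loads(b64url_decode(header_b64))
payload = json.loads(b64url_decode(payload_b64))

print(header)   # {'alg': 'HS256', 'typ': 'JWT'}
print(payload)  # {'sub': '1234567890'}
```

Note that no key is needed to read the Payload here, which is exactly the "full disclosure" property discussed above.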
For example, when the owner of an electronic wallet purchases a product worth 100,000 yen, the product seller does not need to see the general attribute information (birthday, address, phone number, etc.) normally used to identify the buyer; the seller only needs to see the buyer's electronic wallet balance. As a buyer, you can make the purchase by disclosing only essential information such as your balance and ID, without disclosing all of your personal information. This alone may not be a complete safeguard, but disclosing only the necessary attributes from the JWS to the business operator is effective in preventing leakage of personal information to a certain extent. How to implement SD-JWT Let's start with the traditional ID token generation procedure. First, express the personal information of a certain user A in JSON format as shown below. { "sub": "cd48414c-381a-4b50-a935-858c1012daf0", "given_name": "jun", "family_name": "liang", "email": "jun.liang@example.com", "phone_number": "+81-080-123-4567", "address": { "street_address": "123-456", "locality": "shibuya", "region": "Tokyo", "country": "JP" }, "birthdate": "1989-01-01" } The issuer then assigns an SD-JWT Salt (random value) to each piece of attribute information. 
{ "sd_release": { "sub": "[\"2GLC42sKQveCfGfryNRN9c\", \"cd48414c-381a-4b50-a935-858c1012daf0\"]", "given_name": "[\"eluV5Og3gSNII8EYnsxC_B\", \"jun\"]", "family_name": "[\"6Ij7tM-a5iVPGboS5tmvEA\", \"liang\"]", "email": "[\"eI8ZWm9QnKPpNPeNen3dhQ\", \"jun.liang@example.com\"]", "phone_number": "[\"Qg_O64zqAxe412a108iroA\", \"+81-080-123-4567\"]", "address": "[\"AJx-095VPrpTtM4QMOqROA\", {\"street_address\": \"123-456\", \"locality\": \"shibuya\", \"region\": \"Tokyo\", \"country\": \"JP\"}]", "birthdate": "[\"Pc33CK2LchcU_lHggv_ufQ\", \"1989-01-01\"]" } } Each attribute in "sd_release" is then hashed using the function specified in "_sd_alg" and stored in the "_sd" list below; a new payload is created by adding the key-binding key (cnf), the expiry (exp), and the issued-at time (iat). Issuing a token based on this payload completes the SD-JWT. { "kid": "tLD9eT6t2cvfFbpgL0o5j/OooTotmvRIw9kGXREjC7U=", "alg": "RS256" }. { "_sd": [ "5nXy0Z3QiEba1V1lJzeKhAOGQXFlKLIWCLlhf_O-cmo", "9gZhHAhV7LZnOFZq_q7Fh8rzdqrrNM-hRWsVOlW3nuw", "S-JPBSkvqliFv1__thuXt3IzX5B_ZXm4W2qs4BoNFrA", "bviw7pWAkbzI078ZNVa_eMZvk0tdPa5w2o9R3Zycjo4", "o-LBCDrFF6tC9ew1vAlUmw6Y30CHZF5jOUFhpx5mogI", "pzkHIM9sv7oZH6YKDsRqNgFGLpEKIj3c5G6UKaTsAjQ", "rnAzCT6DTy4TsX9QCDv2wwAE4Ze20uRigtVNQkA52X0" ], "iss": "https://example.com/issuer", "iat": 1706075413, "exp": 1735689661, "_sd_alg": "sha-256", "cnf": { "jwk": { "kty": "EC", "crv": "P-256", "x": "SVqB4JcUD6lsfvqMr-OKUNUphdNn64Eay60978ZlL74", "y": "lf0u0pMj4lGAzZix5u4Cm5CMQIgMNpkwy163wtKYVKI", "d": "0g5vAEKzugrXaRbgKG0Tj2qJ5lMP4Bezds1_sTybkfk" } } }. { Signature: the issuer signs the Payload with its private key and places the signature here, ensuring that the contents of the Payload cannot be tampered with } The order of the attributes and hash values in "sd_release" and "_sd" does not need to be preserved. How to use SD-JWT The issuer sends the SD-JWT and "sd_release" together to the owner. 
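The hashing step can be sketched in a few lines of Python. Note that this is a simplified model: the exact disclosure serialization differs between SD-JWT draft versions, so the digest produced here will not reproduce the article's example values byte for byte.

```python
import base64
import hashlib
import json

def sd_hash(disclosure_json: str) -> str:
    # SHA-256 over the serialized disclosure, base64url-encoded without padding,
    # matching the "_sd_alg": "sha-256" declared in the payload
    digest = hashlib.sha256(disclosure_json.encode("utf-8")).digest()
    return base64.urlsafe_b64encode(digest).decode("ascii").rstrip("=")

# A [salt, value] pair taken from the article's "sd_release" example
email_disclosure = json.dumps(["eI8ZWm9QnKPpNPeNen3dhQ", "jun.liang@example.com"])
print(sd_hash(email_disclosure))  # a 43-character base64url digest
```

Because each value is salted before hashing, a verifier cannot brute-force common attribute values (e.g. birthdates) from the digests in "_sd".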
Depending on the situation, the owner can then submit the attribute information they want to disclose together with the SD-JWT, allowing authentication while maintaining safety and integrity. "email": "[\"eI8ZWm9QnKPpNPeNen3dhQ\", \"jun.liang@example.com\"]", If you want to disclose only your e-mail address, you submit the part above together with the SD-JWT. The verifier can confirm the accuracy of the email by checking the following two points: The result of hashing the e-mail portion matches "5nXy0Z3QiEba1V1lJzeKhAOGQXFlKLIWCLlhf_O-cmo" in the "_sd" list. Recalculating the signature over the Payload matches the Signature in the SD-JWT (i.e., the Payload has not been tampered with). Summary By participating in this summit, I was able to understand the history and future development of OpenID. I also learned about SD-JWT, a format different from the JWTs the ID team has used so far. There were many interesting discussions, so I recommend participating even if you are not usually involved in the identity field. I look forward to one day joining as a speaker representing KINTO Technologies. Reference OpenID Summit Tokyo 2024
Introduction Hello. I am Nakaguchi from KINTO Technologies' Mobile App Development Group, and the leader of the iOS team for the KINTO Easy Application app. As part of our team-building efforts, we conducted a 180-degree feedback session, and I would like to share the details of this initiative. If you're interested, please also check out this article on our team's retrospective, another initiative carried out by our team. Background Recently, a group of volunteers within our company organized a group reading session for "GitLab ni manabu sekai saisentan no remote soshiki no tsukurikata: Dokumento no katsuyo de office nashi demo saidai no seika o dasu global kigyo no shikumi". You can learn more about this group reading session in Learning from GitLab: How to Create the World's Most Advanced Remote Organization. The session was truly inspiring, and I wanted to bring some of its insights back to my team. Among the various topics we discussed, I was particularly interested in the concept of 360-degree feedback. 360-degree feedback is an evaluation method in which an employee receives feedback from multiple perspectives, including colleagues, superiors, and subordinates. In general, feedback tends to flow from superiors to subordinates, but personally I have always believed that feedback between colleagues, or from team members to their supervisor, is equally important. This led me to consider implementing 360-degree feedback within our team. However, during the session we also identified potential drawbacks of 360-degree feedback, such as the broad range of respondents and evaluations coming from people who may not be directly involved in the employee's work. To counteract these drawbacks, I learned about 180-degree feedback, a method that limits respondents to team members only, and decided to introduce it to my team. 
Objectives Through the 180-degree feedback process, I aimed to achieve the following objectives: Identify discrepancies between the roles the team expects each member to fulfill and the roles each member perceives for themselves. Recognize individual strengths and weaknesses, and use these insights for future growth. Create an opportunity for team members to express their honest opinions about each other. Foster team unity by encouraging team members to be considerate of their peers' feelings through the survey process. I believed that this 180-degree feedback would be a great opportunity for members to understand each other, thereby improving the quality of relationships within the team. Implementation Method Target members: 1 team leader, 6 engineers. Survey method We conducted an anonymous survey using Microsoft Forms. Each member answered the following questions for every member except themselves: Quantitative evaluation (5-point scale): Questions regarding proactivity Questions regarding openness to others Questions regarding decision-making attitude Questions regarding perseverance Questions regarding learning from new experiences Questions regarding initiative Qualitative evaluation (free text): Questions regarding the target member's strengths Questions regarding areas for improvement Words of appreciation to the target member Considerations for conducting the survey To ensure team members engaged positively in the process, we shared the background and objectives of the feedback in advance at 1-on-1 meetings. To reinforce the anonymity of the survey, we conducted a pre-test survey and shared the results. To avoid a low response rate caused by insufficient time, we allocated specific time for completing the survey. Since feedback may include harsh or negative comments about target members, we added a question at the end of the survey allowing respondents to express their gratitude to the others. 
(This helps end the survey on a positive note and makes it easier for recipients to accept the feedback constructively.) As team leader, I wanted to show my team members that I am open to sharing my own feedback results, including areas for self-improvement. (However, I did not pressure the team members to share their own results.) Summary of My Feedback Results I have summarized my feedback results below. Strengths High communication skills and approachability: for example, I actively participate in meetings and make an effort to explain things clearly. Eagerness to learn and share knowledge with others: I consistently stay up to date on new technology trends and share this information through Slack and meetings. Efforts to improve teamwork: I regularly organize team events to strengthen the bonds among team members. Strong information-gathering skills and quick response: I promptly address issues as they arise and communicate accurate information to the relevant parties. Compassionate and reliable leader: I listen to concerns from team members and give them appropriate advice. Areas for Improvement Understanding of product specifications: at times, I proceed with development without fully understanding the specifications of a feature. Organizing task tickets more frequently: ticket prioritization is sometimes insufficient, which causes important tasks to be delayed. Explaining the background and purpose of initiatives more clearly: insufficient communication leaves the purpose of some initiatives unclear, resulting in a lack of understanding among team members. Taking more risks: I may be too cautious when approaching new initiatives, which can result in missed opportunities. My feedback results showed that the areas I consciously focus on were recognized as strengths, which was very pleasing. 
On the other hand, the feedback on areas for improvement allowed me to recognize not only the aspects I was already aware of but also ones I had not noticed. This feedback will help me grow further. Additionally, the words of appreciation from team members at the end boosted my motivation significantly, and I am committed to contributing to the team even more in the future. Team Strengths and Areas for Improvement Identified Through 180-degree Feedback I also summarized the feedback for the team as a whole. Team strengths Diverse technical skills and leadership: each member has strong technical skills and leadership abilities. Communication skills: communication within the team is active, and information is shared effectively. Problem-solving ability: the team actively tackles technical challenges and complex tasks. Learning motivation: there is a strong commitment to acquiring new knowledge and technical skills, leading to continuous growth. Areas for improvement for the entire team Improving information-sharing efficiency: finding more efficient ways to share new technical skills and project-related information. Clarifying roles and responsibilities: further clarifying roles and responsibilities to fully leverage each team member's abilities. Developing a broader perspective: emphasizing the importance of sharing the project's overall vision and objectives among team members. Technical skill sharing and knowledge management: promoting the cross-sharing of technical knowledge within the team to enhance the skills of all members. Additionally, I have summarized each team member's strengths and roles in the diagram below. Post-Retrospective Survey After conducting the 180-degree feedback, we carried out a survey to gather feedback on the process itself (7 responses). Change in evaluation Before: 7.29 -> After: 9.19 NPS (What is NPS?) 57 Would you like to conduct 180-degree feedback on a regular basis (e.g. every six months)? 
86% answered "Yes". AI summary of "How satisfied did you feel after you participated?" (free text) The survey results indicate that respondents felt they gained deeper self-awareness and were able to identify their own challenges. Through feedback from others, they also gained new perspectives that they might not have noticed otherwise. By receiving specific evaluations and areas for improvement, they felt that their future course of action became clearer. These results suggest that the survey was an effective tool for self-reflection. Conclusion In conducting the 180-degree feedback, we encountered a few operational challenges. The average scores were generally high, making it difficult to differentiate between responses. The timing coincided with changes in team membership, so we were unable to provide feedback to certain team members in an appropriate manner. Overall, however, I felt that the feedback process was well received and satisfying for the team, including myself. As the survey results show, the team members are highly interested in continuing this process regularly, and I plan to carry on with these efforts in the future. Through this feedback, I was able to identify areas for improvement both for myself and for the team as a whole, which I intend to leverage for future growth. I also hope that each team member can recognize their own areas for improvement and use this as an opportunity for growth.
🎉 We Are Going to Be a Gold Sponsor for iOSDC Japan 2024 Hello! The Obon season is just around the corner now, right? I am planning to take my children someplace fun this year. Hiroya (@___TRAsh) here. Well, this time, I have some news from the iOS team. KINTO Technologies is going to be a Gold Sponsor for iOSDC Japan 2024 🙌 iOSDC Japan 2024 will be our first participation as exhibitors, and we are going to hold a coding quiz. We are also planning to hand out novelty goods, so please do come visit our booth! So, seeing as it was such a great opportunity, for this article, I interviewed some of our iOS team members. I hope it will be interesting for you to see the diverse range of members we have. 🎤 The interviews This time, I interviewed the KINTO Unlimited iOS team. KINTO Unlimited is an international team with lots of members from overseas. Daily work is primarily conducted in English, though most of them also speak Japanese. T.O. —Please briefly introduce yourself. Hello. I am T.O. I am an iOS engineer at KINTO Technologies. I worked on web front-end development up to my previous job, and had never done any mobile development until I came here, or used GitHub either. I have been with the company for about two and a half years now, during which time I have been involved in a variety of projects and learned about a variety of architectures and modern development methods. —What changed for you after joining this company? Acquiring the skills needed to be an iOS engineer. Also, I moved to Tokyo when I joined KINTO Technologies, so my living environment changed as well. Back then, there were not that many people out and about due to the COVID-19 pandemic, and seeing giant pandas for the first time in Ueno Park left a big impression on me. —What do you like about this company? The atmosphere makes it extremely easy to talk to people. You can consult colleagues about both private and technical matters without hesitation. 
It is a real blessing to be able to feel free to talk to people whether I am in the office or working remotely. Another good thing is how the benefits package has plenty of things that are really helpful for the parenting generation. —What challenges do you want to take on in the future? In the future, I want to do development using AR and ML. I am working on a project that relates to those fields at the moment, so I want to get into them even further. Also, I want to play games with my children. —A few final words for the blog, please! You can work in a wide range of ways, so it is a very accommodating company to work for 👍 V.V. —Please briefly introduce yourself. I am a Russian who was born and raised in Russia, and have around eight years of experience in iOS development. My hobbies are TRPGs and being a parent. My previous work included creating Windows desktop apps in Russia and ambulance-related services in the United States. Then after coming to Japan, I worked for another company for several years before joining KINTO Technologies. —What changed for you after joining this company? Up to then, I had always worked for small companies, so I always felt uneasy on account of being a parent. On the other hand, KINTO Technologies is a Toyota Group company, so I can work free from worry. The basic salary is not all that different, but there are lots of extra allowances, so I think my pay has gotten quite a lot better. I was able to get a double bed with my bonus. —What do you like about this company? Previously, I had always worked in small teams, so there was no one to turn to for technical advice or mentoring. In KINTO Technologies, however, it is easy to discuss technical matters. Also, I get opportunities to share my own knowledge and there are lots of highly experienced team members around, too, so it is very stimulating. —What challenges do you want to take on in the future? 
I am interested in developing AR and ML functionalities, so I want to delve into those more deeply. I also want to support my family well. —A few final words for the blog, please! Come join us here, because you will find lots of opportunities to grow ✋ S.C. —Please give us a brief introduction of yourself. I am a Canadian who was born in Korea, then moved to Canada, and am now living in Japan. I came to Japan because I was interested in Japanese movies and culture, and had lots of friends from here. I have been here for around 10 years now. In my previous job, I also worked on back-end development. I am currently the leader of the KINTO Unlimited iOS team. —What changed for you after joining this company? It is a group where lots of mobile engineers get involved regardless of the project, so it is easy to improve skills with an iOS development focus. I am also glad to be spending more time working in Japanese. —What do you like about this company? In my previous job, the system was huge and I was pretty much doing maintenance work, but at KINTO Technologies, I frequently develop new functionalities and get lots of opportunities to learn about new technologies. Another great thing is how easy it is to incorporate modern technologies. Also, I love how we actively hold study sessions, giving us lots of opportunities to share our knowledge. —What challenges do you want to take on in the future? I just became a team leader, so I want to improve my leadership skills. I am also interested in how implementation approaches vary depending on the OS, so I want to expand my knowledge of Android as well. Also, I am currently attending grad school, and I want to graduate. —A few final words for the blog, please! I am in a modern development environment where I get to experience a wide variety of things that I never had any chance to before, so I am having a great time. 
👍 🚙 Summary Our company is still growing, and many of our products have only just gotten started, so it is a development environment where it is easy to adopt modern methods. We work in diverse teams, so we get plenty of opportunities to learn about new cultures and technologies. If you want to work in an environment like that, please apply to join us! https://www.kinto-technologies.com/recruit/ And on that note... :::message The Challenge Token is here! #KTCでAfterPartyやります ::: On Monday, September 9, we are going to hold the iOSDC JAPAN 2024 AFTER PARTY jointly with TimeTree and WealthNavi, marking our first-ever three-company collaboration event. 🥳 The venue will be our office in Nihonbashi Muromachi. 🗺️ Please do come along to this, too! https://kinto-technologies.connpass.com/event/327743/ It will probably be extremely hot on the day, so please keep yourselves thoroughly hydrated while you are enjoying the event! We are looking forward to seeing you all at our booth ✋
Introduction Hello, this is Kuwahara @Osaka Tech Lab from the SCoE Group of KINTO Technologies (hereinafter referred to as KTC). SCoE is an abbreviation for Security Center of Excellence, which may not be a familiar term to you yet. At KTC, we restructured our CCoE team into the SCoE Group this past April. To find out more about the SCoE Group, please check our article "SCoE Group: Leading the Evolution of Cloud Security." To find out more about the Osaka Tech Lab (KTC's Kansai base), see "Introducing the Osaka Tech Lab." In this blog, we will report on our participation in the 28th Shirahama Symposium on Cybercrime, held from July 4th to 6th, 2024. First, for those who are not familiar with the place "Shirahama": Shirahama is a town located in the Nishimuro District of Wakayama Prefecture. It is a picturesque tourist destination, known for its stunning ocean views, beautiful beaches, and relaxing hot springs. Shirahama is also home to Adventure World, which houses the largest number of pandas in Japan, with four pandas in total. I imagine that the symposium participants not only deepened their knowledge of cybersecurity but also enjoyed the many attractions of Shirahama. Symposium Overview The theme was "How can we address the rapidly changing environment and the increasing complexity of cybercrime?" Although the event was mainly focused on cybercrime, it also included talks and panel discussions on recent general security threats and emerging topics. Based on the philosophy that "cybersecurity cannot be protected by a single organization," this symposium values horizontal connections between companies, government agencies, educational institutions, and other organizations. As a result, I got to hear many ideas and opinions directly from the source that could only be gained by being present at the event. 
The daytime session was held at the Wakayama Prefectural Information Exchange Center Big U, while the evening session shifted to Hotel Seamore, located about 8 km away. (I won't delve into the fact that the daytime venue is technically in neighboring Tanabe City rather than Shirahama Town.) There were numerous interesting talks and presentations at the symposium, but I will highlight two key topics that left a lasting impression on me. For a full program listing, please check the official website. Key topic 1: Cross-organizational collaboration Networking is regarded as extremely important at these symposiums. The greeting from the head of the organizing committee and several speakers emphasized that no single company or organization can effectively counter threats alone. They highlighted the importance of defending against threats across a broad, connected surface rather than at isolated points. This underscores the necessity of collaboration that transcends the boundaries of different industries, as well as the public, private, and academic sectors, to address the complexity and diversity of cyberattacks. It is important to share the understanding that information sharing between companies, collaboration with government and police agencies, and cooperation with educational institutions are key to achieving stronger security measures. Many police representatives also participated and exchanged views with individuals from private companies. In fact, the first person to invite me to exchange business cards at the symposium was a representative from a police force in a certain prefecture. It was also emphasized that it is difficult for a company to deal with security incidents on its own, and that it is important for each organization to share its experience and know-how and implement effective security measures. 
During the evening's BOF (Birds of a Feather) event, participants with the same concerns gathered from across organizations and industries and engaged in a lively exchange of opinions. Key topic 2: Generative AI and security Several talks covered trends in generative AI security. The most impressive was the talk by Fujitsu Laboratories, who presented the latest trends and practical knowledge on generative AI security. Their presentation emphasized the importance of addressing security in two ways: protecting systems with AI, and ensuring the security of AI itself. Protect with AI: AI as a cybersecurity defense, and AI to prevent security incidents. In the field of protection through AI, existing security coverage is expanding greatly now that generative AI can be used as a means of defense. Fujitsu Laboratories introduced their efforts to expand security AI components and create a DevSecOps framework. Protecting AI: threats and attacks against AI, and protecting AI from attacks. In the "Protecting AI" section, the risks posed by generative AI were explained in detail, along with specific methods of cyberattack against generative AI and approaches to countermeasures. Attacks against AI, such as "stealing information" and "deceiving the AI," were introduced with concrete examples. The talk was very informative, as it systematically summarized the security aspects that should be considered when building products that utilize generative AI. For instance, we received useful input for formulating security guidelines for the generative AI development process, as well as examples of guardrails and vulnerability scanners, which will be useful in creating concrete guidelines. TIPS for those attending next year Here are some tips for those who will be attending next year. 
Securing tickets: The hot-spring symposiums (Dogo, Echigo-Yuzawa, Atami, Kyushu), including this Shirahama Symposium, are very popular, and tickets are hard to come by. Be sure to check the sales start date and secure your tickets early. I also recommend buying lunch (bento) tickets, as lunch options in and around the venue are limited. Securing transportation: The symposium provides a shuttle bus from Shirahama Station to the venue, but the schedule is not flexible. The venue is difficult to reach by public transportation, so be careful about the shuttle bus times. Renting a car is also an option. (I got permission from my company to attend by car, which was really helpful.) Choosing your accommodation: Given the shuttle bus, it is convenient to stay close to the evening venue (the hotel). The area around the evening venue is a hot-spring resort, so there are many places to stay. Networking: Bring plenty of business cards. This is a networking-oriented symposium, so the more you interact with others, the more you'll gain. Summary In cybersecurity, connections across organizations and industries are important. The "straight from the horse's mouth" ideas and opinions that you can only hear by actually being there are truly invaluable. I am very grateful to the organizing committee members, speakers, sponsors, and all the other participants for making the symposium so worthwhile. Next year, how about immersing yourselves in cybersecurity while admiring the beautiful Shirahama sunset as well? Finally The SCoE Group I belong to is looking for new team members. We welcome both those with practical experience in cloud security and those who are interested but have no experience. Please feel free to contact us. For additional details, please check here.
Introduction Hello! I'm Ren.M, a front-end developer on KINTO ONE (Used Cars). KINTO Technologies will be a premium sponsor of JSConf JP 2024, held on Saturday, November 23, 2024 at the Kudanshita Sakaue KS Building. ■About JSConf JP 2024 JSConf JP 2024 is a Japanese JavaScript festival organized by the Japan Node.js Association. This will be the fifth JSConf held in Japan. Sponsor booth At our booth, we will run a survey about JavaScript! Everyone who answers gets to spin a capsule-toy machine and receive original novelty goods! The photo shows some of the novelties: paper clips and tote bags. Sponsor workshop In our workshop, we will present on "Bringing a car subscription service in-house with Next.js: our experience, one year on (tentative title)". Please check the link below for details! https://jsconf.jp/2024/talk/kinto-technologies/ We Are Hiring! KINTO Technologies is looking for people to work with us! We can also start with a casual interview. If you are even slightly interested, please apply via the link below! https://hrmos.co/pages/kinto-technologies/jobs/1955878275904303141 Conclusion If you are interested, please stop by our booth and workshop! We look forward to seeing you at the venue!
Introduction Previously in Part 1, I covered the basics of using variables, adjusting product quantities, and setting subtotals. In this article, I'll continue from where we left off and explain how to increase the number of items in the cart to two, set a subtotal, establish free-shipping conditions, calculate the total amount, and modify the message displayed when free shipping is applied. Let's Make a Shopping Cart Mock-Up Using the Functions of Figma Variables! Part 2 ![](/assets/blog/authors/aoshima/figma2/1.webp =300x) Final layout of the shopping cart [Part 1] What Are Variables Part Creation First Is the Count-Up Function How To Create and Assign Variables Creating a Count-Up Function Subtotal Settings [Part 2] Increasing the Number of Products to Two Subtotal Settings Free Shipping Settings Total Settings Changing the Wording to "Free Shipping" Completed Increasing the Number of Products to Two To have two items in the cart, start by copying the product information from Part 1. Then, update the product image, name, price, and the quantity displayed in the cart. (Of course, you can also add products by duplicating with the component variant function.) ![](/assets/blog/authors/aoshima/figma2/2.webp =300x) Duplicate the original product, changing the product name, product image, and price as you do so. In the following explanation, the original product (SPECIAL ORIGINAL BLEND) will be referred to as "Product A" and the newly copied product (BLUE MOUNTAIN BLEND) as "Product B". Additionally, just like with Product A, assign the variable "Kosu2" to the quantity of Product B. Refer to Part 1 to set up the count-up function for Product B's plus and minus buttons. Subtotal Settings This is an application of the subtotal settings made in Part 1. Create and Assign a Variable In Part 2, we assume that the cart contains two products, Product A and Product B, each with a quantity of one. 
Consequently, we update the value of the local variable "Shoukei" to the new total of ¥250, calculated as Product A (¥100 x 1) + Product B (¥150 x 1). When you make this change, any numbers on the canvas linked to this variable automatically update to reflect the new amount.

![](/assets/blog/authors/aoshima/figma2/3.webp =300x) List of local variables. The red box is the variable assigned to the subtotal number.

![](/assets/blog/authors/aoshima/figma2/4.webp =300x) The subtotal reflects the value of the local variable "Shoukei".

Give an Action to the Button

In Part 1, the subtotal was calculated from Product A alone, as shown in the figure below. Select the variable "Shoukei" to be updated when the plus or minus button for Product A is clicked, then enter the variable "Kosu1" (the number of items) multiplied by 100 (the unit price of Product A) as the formula.

![](/assets/blog/authors/aoshima/figma2/5.webp =300x) The subtotal formula set in Part 1

Following this principle, we set the formula on the plus and minus buttons for Products A and B as shown below, so that the subtotal becomes the sum of the amounts for Products A and B.

![](/assets/blog/authors/aoshima/figma2/6.webp =300x) Settings for the plus button of Product A. The area surrounded by the dotted line is the subtotal setting range. The solid red boxes show Product A on the left and Product B on the right.

With this setting, the subtotal is recalculated and updated each time you press a plus or minus button. If you try the buttons on the preview screen, you can see that the subtotal correctly reflects the total price of the two items.

![](/assets/blog/authors/aoshima/figma2/7.gif =300x)

Setting Up Free Shipping

Next, I will explain how to set up "Free shipping on purchases over ¥1,000!"
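For readers who think in code, the subtotal formula attached to the buttons can be sketched in TypeScript. This is only an illustrative model of the Figma settings, not anything you write in Figma itself; the names `kosu1` and `kosu2` mirror the mockup's "Kosu1"/"Kosu2" variables.

```typescript
// Unit prices from the mockup: Product A is ¥100, Product B is ¥150.
const PRICE_A = 100;
const PRICE_B = 150;

// The formula set on every plus/minus button: the subtotal ("Shoukei")
// is recomputed from both quantities on each press.
function subtotal(kosu1: number, kosu2: number): number {
  return kosu1 * PRICE_A + kosu2 * PRICE_B;
}

// One of each product gives the initial subtotal of ¥250.
console.log(subtotal(1, 1)); // 250
```

Pressing Product A's plus button once would correspond to `subtotal(2, 1)`, giving ¥350.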
The shipping conditions are as follows: if the subtotal is less than ¥1,000, a shipping fee of ¥500 is added; if the subtotal is ¥1,000 or more, shipping is free.

Create and Assign a Variable

First, assign a variable to the number that represents the shipping fee. In this mockup, the initial cart contains one of each item, the subtotal is ¥250, and the shipping fee is ¥500. Therefore, we will name the newly created variable "Shipping", set its value to 500, and assign it to the number next to the shipping fee.

![](/assets/blog/authors/aoshima/figma2/8.webp =300x) The variable "Shipping" is assigned to the number next to the shipping fee

Give an Action to the Button

Next, set up the button action that updates the shipping fee. The fee depends on whether the subtotal is less than ¥1,000, so we will use an if statement. If the subtotal is less than ¥1,000, the shipping fee is ¥500, which can be expressed as follows:

![](/assets/blog/authors/aoshima/figma2/9.webp =300x) This formula means that if the subtotal is less than ¥1,000, the value of "Shipping" should be 500.

You may wonder why we need to set "Shipping" to 500 when its initial value is already 500. This setting ensures that after the subtotal reaches ¥1,000 or more and the shipping fee becomes ¥0, the fee is reset from ¥0 back to ¥500 if the subtotal drops below ¥1,000 again.

Next, if the subtotal is ¥1,000 or more, the shipping fee will be ¥0. This can be expressed as shown in the red box below.

![](/assets/blog/authors/aoshima/figma2/10.webp =300x) This means that if the subtotal is ¥1,000 or more, "Shipping" will be set to 0.

By the way, "else" covers any case that does not meet the condition set by "if". Here, "if" covers the case where the subtotal is less than ¥1,000, so "else" covers the case where it is ¥1,000 or more.
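The if/else set on the buttons corresponds to this small TypeScript sketch (again just an illustrative model of the Figma actions, not real Figma code):

```typescript
// The branch attached to each plus/minus button. "Shipping" is written
// on every press, which is why the ¥500 fee comes back automatically
// whenever the subtotal drops below the threshold again.
function shippingFee(subtotal: number): number {
  if (subtotal < 1000) {
    return 500; // less than ¥1,000: flat ¥500 fee
  } else {
    return 0; // ¥1,000 or more: free shipping
  }
}
```

Note the boundary: exactly ¥1,000 already qualifies for free shipping, matching the "¥1,000 or more" condition in the else branch.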
After applying these settings to each button and previewing, you will see the shipping fee change to ¥0 once the subtotal reaches ¥1,000. Set up this way, the shipping fee adjusts automatically according to the subtotal.

![](/assets/blog/authors/aoshima/figma2/11.webp =300x) The shipping fee becomes ¥0 once the subtotal reaches ¥1,000

Total Settings

Next, let's move on to setting the total amount.

Create and Assign a Variable

The variable for the total amount will be abbreviated as "T_Am", for "Total Amount". To repeat, in this mockup we assume that the cart contains one of each product, A and B, with a subtotal of ¥250 and a shipping fee of ¥500. Therefore, we set the initial value of "T_Am" to 750. Assigning the variable "T_Am" to the number indicating the total amount displays the value "750".

![](/assets/blog/authors/aoshima/figma2/12.webp =300x) The variable is assigned to the total amount

Give an Action to the Button

The total amount also needs a conditional branch on whether the subtotal is less than ¥1,000. The condition is the same as for the shipping fee, so we simply add the extra actions to it. When you hover your mouse over the if statement, a "+" button labeled "Add nested action" appears; clicking it opens a space for additional settings. This is how you attach multiple actions to a single condition.

![](/assets/blog/authors/aoshima/figma2/13.webp =300x)

If the subtotal is less than ¥1,000, the total amount is the subtotal plus the shipping fee, as shown in the red box below.

![](/assets/blog/authors/aoshima/figma2/14.webp =300x)

On the other hand, if the subtotal is ¥1,000 or more, the total amount is just the subtotal (the shipping fee being ¥0), as shown in the red box below. Note that this goes inside the "else" branch.
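With the nested actions in place, a single button press effectively runs both updates inside one branch. The following TypeScript sketch models this (names are assumptions mirroring the mockup's "Shipping" and "T_Am" variables; it is not Figma code):

```typescript
// One button press updates both variables inside the same if/else,
// just as the nested actions do in the Figma prototype.
function updateCart(subtotal: number): { shipping: number; total: number } {
  if (subtotal < 1000) {
    // shipping applies, and the total includes it
    return { shipping: 500, total: subtotal + 500 };
  } else {
    // free shipping, so the total is the subtotal alone
    return { shipping: 0, total: subtotal };
  }
}

// Initial cart: subtotal ¥250 → shipping ¥500, total ¥750.
console.log(updateCart(250)); // { shipping: 500, total: 750 }
```

This also explains why "T_Am" starts at 750: it is just the initial subtotal of ¥250 plus the initial ¥500 fee.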
![](/assets/blog/authors/aoshima/figma2/15.webp =300x)

After applying the settings to each button and previewing, you will see that when the subtotal reaches ¥1,000, the shipping fee becomes ¥0 and the change is reflected in the total amount.

![](/assets/blog/authors/aoshima/figma2/16.webp =300x)

Modifying the "Free Shipping" Text

Finally, we will make some changes to the free shipping text. Here, we want to hide the "free shipping" text (boxed in red) under the header when shipping becomes free.

![](/assets/blog/authors/aoshima/figma2/17.webp =300x)

Create and Assign a Variable

Boolean variables are often used for toggling between show and hide. A Boolean is a data type that represents binary conditions such as "true/false" or "yes/no". When toggling show/hide states like this, Figma treats "true" as shown and "false" as hidden, so we will use these defaults.

First, open Local Variables and press the Create Variable button, selecting "Boolean" as the data type. Since this variable relates to the shipping text, we named it "Ship_Txt". In the cart's initial state the subtotal is less than ¥1,000 and the text needs to be displayed, so set the initial value to "true".

![](/assets/blog/authors/aoshima/figma2/18.webp =300x) A Boolean local variable is created with an initial value of true

Next, I will explain how to assign the variable we have created. First, select the object on the canvas to which you want to assign the variable. Then, look at the "Layers" section in the panel on the right side of the screen and right-click the "eye" icon next to Pass Through (transparency). The icon is not directly shown, so it may be difficult to find. Right-clicking reveals a drop-down menu listing the variables that can be assigned; select the variable we created earlier.
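The visibility rule this variable will encode is a single condition on the subtotal. As a TypeScript sketch (illustrative only; `true` corresponds to "shown" under Figma's convention):

```typescript
// "Ship_Txt" follows Figma's convention: true = layer shown, false = hidden.
// The banner stays visible only while free shipping has not kicked in yet.
function shipTxtVisible(subtotal: number): boolean {
  return subtotal < 1000;
}

console.log(shipTxtVisible(250)); // true  (banner shown)
console.log(shipTxtVisible(1000)); // false (banner hidden)
```

The button actions set up in the next step simply write this condition's result into "Ship_Txt" on every press.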
![](/assets/blog/authors/aoshima/figma2/19.webp =300x)

Give an Action to the Button

We will also add action settings that conditionally show or hide the text based on the subtotal amount. If the subtotal is less than ¥1,000, the text is displayed ("Ship_Txt" = true), so add the setting shown below.

![](/assets/blog/authors/aoshima/figma2/20.webp =300x) The setting that changes the Boolean variable "Ship_Txt" to "true"

On the other hand, if the subtotal is ¥1,000 or more, the text is hidden ("Ship_Txt" = false), so add the following setting. Note that this goes inside the "else" branch.

![](/assets/blog/authors/aoshima/figma2/21.webp =300x) The setting that changes the Boolean variable "Ship_Txt" to "false"

If you configure each button and run the preview, you will see the text automatically hide when the subtotal reaches ¥1,000.

![](/assets/blog/authors/aoshima/figma2/22.webp =300x)

I was able to successfully hide the text. However, this leaves empty space, which isn't ideal from a layout perspective, so next I would like to change the text itself instead.

Modifying the "Free Shipping" Text, Ver. 2

Create and Assign a Variable

Depending on whether there is a shipping fee, we will switch the text between the following two variants:

If the subtotal is less than ¥1,000: "Only ¥X away from Free shipping!"
If the subtotal is ¥1,000 or more: "Free shipping!"

I created a component for the free shipping text called "Ship_Txt_Panel" with two variants. Since we want to toggle between them, we enter a Boolean value for each variant's property.

![](/assets/blog/authors/aoshima/figma2/23.webp =300x)

First, select the upper variant and open the property section in the panel on the right side of the screen. This variant is the one displayed initially, so set it to "true".

![](/assets/blog/authors/aoshima/figma2/24.webp =300x)

Next, set the property of the lower variant to false.
![](/assets/blog/authors/aoshima/figma2/25.webp =300x)

Once the properties are set, place the component instance in the design. With the instance selected, check the right panel and you will see a Boolean toggle switch in the Instance section.

![](/assets/blog/authors/aoshima/figma2/26.webp =300x) Selecting an instance placed in the design

![](/assets/blog/authors/aoshima/figma2/27.webp =300x) The toggle switch in the right panel is set to true

When you flip this toggle, the content of the instance changes, confirming that the Boolean value has been set.

![](/assets/blog/authors/aoshima/figma2/28.webp =300x) Switch the toggle in the right panel to false

![](/assets/blog/authors/aoshima/figma2/29.webp =300x) The content of the instance changes.

Furthermore, if you hover the cursor over the toggle switch, an icon for assigning a variable appears along with a floating label. Click it and candidate variables appear; select the Boolean variable "Ship_Txt" from the list to assign it to the instance.

![](/assets/blog/authors/aoshima/figma2/30.webp =300x) Click the icon in the red box to display the candidate variables.

![](/assets/blog/authors/aoshima/figma2/31.webp =300x) The variable is assigned to the instance.

Give an Action to the Button

The button action here is the same as the one we set earlier to show/hide the free shipping text, so there is no need to modify it. If you preview now, you can see the text change when the subtotal reaches ¥1,000.

![](/assets/blog/authors/aoshima/figma2/32.webp =300x)

Finally, we will make the amount in the text change according to the subtotal.

Create and Assign a Variable

Modify the text within the component, separating the variable amount from the rest of the text.

![](/assets/blog/authors/aoshima/figma2/33.webp =300x) The variable amount part is highlighted

Next, create a variable to assign to that variable part.
Select "Number" as the data type and name it "Extra_Fee". The value of this variable represents the remaining amount needed to reach the ¥1,000 free-shipping threshold. Since the subtotal of the cart is ¥250, the difference is ¥1,000 - ¥250 = ¥750, so set the value of "Extra_Fee" to 750.

![](/assets/blog/authors/aoshima/figma2/34.webp =300x)

When you assign the variable to the number in the variable part, it looks like this:

![](/assets/blog/authors/aoshima/figma2/35.webp =300x)

Give an Action to the Button

To make this variable part change as the subtotal increases or decreases, set it as follows. Note that if the subtotal is ¥1,000 or more, the text switches automatically, so no additional setting is needed.

![](/assets/blog/authors/aoshima/figma2/36.webp =300x)

Completed

If you preview after setting up each button, you will see the amount in the text change each time you press a plus or minus button, and the text itself change when the subtotal reaches ¥1,000.

![](/assets/blog/authors/aoshima/figma2/37.webp =300x)

This concludes "Let's Make a Shopping Cart Mock-Up Using the Functions of Figma Variables!" The features introduced in this article can be applied in many situations, and I hope you find them useful.