KINTO Technologies Tech Blog
Introduction

Hello, I am yuki.n. I joined the company in January this year! I interviewed members who joined in December 2023 and in January this year about their first impressions of the company. I hope this content will be useful for those interested in KINTO Technologies, and that it serves as a reflection for the members who took part in the interviews.

Hoshino

Self-introduction
My name is Hoshino, and I joined the company in January as Deputy General Manager of the Mobility Product Development Division, a newly established division. I have been working to create and operate services from a technical perspective.

How is your team structured?
There are four teams: (1) in charge of in-house media, (2) in charge of incubation projects, (3) in charge of tool development for dealers, and (4) in charge of tool planning for dealers. As of February 2024, we have 23 members, mostly software developers, but we also have producers, directors, and designers. We are a team with the capability to run a business holistically.

What was your first impression of KINTO Technologies when you joined? Were there any surprises?
Yes, indeed! I think it is wonderful that the company provides not only explanations of the divisions, but a full orientation that also covers the business flow, vision, and medium- to long-term plans, so that mid-career employees can move in the same direction.

What is the atmosphere like on site?
Despite the wide age range of the members, from their 20s to their 40s, everyone seems to be in harmony with each other. I initially assumed that many of the members had been with the company for a long time, but a lot of employees had been here for only six months or less. I felt the company's openness to welcoming new people. Work styles are diverse, and remote work seems to be more frequent than in other divisions. I think this team is ideal for those seeking challenges, thanks to the diverse backgrounds of its members. If you are interested, please contact our HR!

How did you feel about writing a blog post?
I think it is a very good initiative, as organizations capable of sharing information will gain a competitive edge in recruitment.

[Question from Romie] Hitting roadblocks in the early stages of launching and running a service can pose significant challenges for recovery down the line. What do you think are the crucial aspects and mindsets one shouldn't overlook when starting out?
It is important to understand that services truly begin when customers start using them, that their value begins from that moment onward, and that they require continuous nurturing. Taking that into consideration and put simply: aim to establish operations that are sustainable over time. However, since new services may not be fully adopted from launch, I think it is important to discern which core requirements must be maintained first and to start small. Once a service starts, the most important thing is to avoid service interruptions for users, rather than any particular troubleshooting technique. As for sustainability and continuity, establishing a strong relationship with the product owner is beneficial.

Choi

Self-introduction
I'm Choi from the New Vehicle Subscription Development Group within the KINTO ONE Development Division. I joined the company in December. I have been working in frontend and backend development for various web services.

How is your team structured?
As a content development team, we have nine members including myself.
Most of them are frontend engineers.

What was your first impression of KINTO Technologies when you joined? Were there any surprises?
I felt the system was well organized, thanks to the comprehensive orientation provided upon joining the company. I was impressed by the company's blend of characteristics from both a major company and an IT startup. What struck me the most was how experienced the engineers were and how they keep exploring and studying new technologies.

What is the atmosphere like on site?
There were many things I didn't understand during my first month after joining, but everyone on the team was kind and helpful in answering any work-related questions. The Osaka office where I work is still a small group of about 30 people, and we can communicate well with people from other divisions. Once a month, we hold lightning talks with office members at our "info-sharing meeting," and we also share ideas to improve our office environment.

How did you feel about writing a blog post?
I was a little worried because I am not good at writing Japanese, but I think it went well, as I was able to reflect on my past two months.

[Question from Hoshino] Is there an app that made you think "This is excellent!" as a frontend engineer?
The pace of technological advancement in frontend development seems fast these days, and many sites are also user-friendly in terms of UI/UX. While I don't have one particular app that I think is the best, I have experience in backend and app development as well as frontend development, and from this perspective I've recently been interested in Flutter and React Native, which let me build without platform restrictions. It has been a few years since they were released, but when I first started developing apps, I had to create Android, iOS, and web apps separately, so eliminating that workload has been a huge help to me as an engineer!

YI

Self-introduction
I am YI from the Operation System Development Group in the Project Development Division. In my previous job at a systems integrator (SIer), I was mainly engaged in B2B system implementation projects across industries, covering both frontend and backend. Currently, I am developing a system to handle back-office operations related to KINTO ONE used vehicles.

How is your team structured?
The used car system team has 5 people, plus about 10 other service providers.

What was your first impression of KINTO Technologies when you joined? Were there any surprises?
I was surprised that the purchase of expensive software licenses went ahead with only the approval of Kageyama-san (our VP) via Slack, and it was ready for use the next day.

What is the atmosphere like on site?
I have the impression that there are many people in my age group with diverse backgrounds.

How did you feel about writing a blog post?
Actually, I had been reading the Tech Blog before joining the company, so I knew about this project, but when it came time to write one myself, I thought, "Is it really my turn now?!"

[Question from Choi] What activities would you like to do outside of work within the company (hobbies, sports, etc.)?
I played tennis in high school, so I'd like to play with the members of the "ktc-tennis club," and also join the activities in the "Golf club" and "Car club" channels in our Slack. I find it really valuable to build "horizontal" connections with colleagues who aren't directly involved in my daily work.
So I am looking forward to participating in different activities!

HaKo

Self-introduction
I am HaKo from the Analysis Produce Team, Data Analysis Group. I've worked as a researcher and analyst for research companies and retail companies. I find it interesting to learn how people use services and what goes through their minds when they do.

How is your team structured?
We are a team of nine, including my manager and me. We were formed by consolidating several smaller, subdivided teams into a single one.

What was your first impression of KINTO Technologies when you joined? Were there any surprises?
I have often worked in environments with older age groups, so I was moved by the lack of rigid "protocols."

What is the atmosphere like on site?
Everyone has their own specialties and areas of expertise, which is very inspiring to see.

How did you feel about writing a blog post?
It's my first time writing a blog post, but it reminded me of the days when I kept a diary on mixi, a long time ago.

[Question from YI] What has changed since joining KINTO Technologies?
There were many projects I took over soon after joining the company. They center more on technical tasks, such as building the email newsletter distribution system, rather than on the sales promotion planning and analysis that had been my main focus until then.

yuki.n

Self-introduction
I'm yuki.n from the New Vehicle Subscription Development Group in the KINTO ONE Development Division. I joined the company in January this year as a frontend engineer and was assigned to Osaka. I would be happy to be involved in a diverse range of tasks, not limited to frontend.

How is your team structured?
As a newly established team, we are currently four people including myself, comprising both internal and external members.

What was your first impression of KINTO Technologies when you joined? Were there any surprises?
I was surprised at how solid the company is in many areas, such as the orientation and company rules. It was a very new experience for me, partly because such thoroughness was rare in my past jobs.

What is the atmosphere like on site?
It gives me a sense of comfort and tranquility, in a positive way. All the other team members are in Tokyo, but I feel no particular communication barriers and feel comfortable interacting with them. I am also grateful that I am allowed to work quite freely, such as being given the chance to try out my own initiatives.

How did you feel about writing a blog post?
This is my first time writing a blog post for work, so I was nervous, but I thought it was a great initiative.

[Question from HaKo] Please tell us what surprised or impressed you when you joined KINTO Technologies.
It overlaps with what I mentioned before, but even though I had just joined the company, I am pleased that the team has accepted my ideas and "what I want to do." I was surprised and impressed at the same time.

Kiyuno

Self-introduction
I am Kiyuno from the Project Promotion Division, Project Development Division. I was assigned to the frontend development of KINTO FACTORY. I work at the Muromachi Office.

How is your team structured?
We are six, including myself, all working on frontend development. I want to keep the title of the youngest engineer on the team. I might even be one of the youngest in the company.

What was your first impression of KINTO Technologies when you joined? Were there any surprises?
I had the impression that it was laid-back, in a good way.
There were no surprises in particular; I am happy with the relaxed atmosphere I expected. It is wonderful that they are so accepting of me wanting to try things!

What is the atmosphere like on site?
I'd say our team is like a cozy little island. While communication within the team is active and individual opinions are respected, the team is introverted and has room for improvement in exerting more external influence; we found this out through the StrengthsFinder assessment. I was also warmly welcomed after joining the company, which made it easy to quickly get used to the atmosphere.

How did you feel about writing a blog post?
I had been tasked with posting tech blogs in my previous job, so I wasn't too concerned about it. Since I'm a naturally shy person, I feel anxious about self-disclosure, but I would be happy if this article sparks your interest in our organization.

[Question from yuki.n] Please tell us about what you are currently interested in or pursuing in terms of technology!
I am delving into prompting skills to optimize the output of tools like ChatGPT. This also comes in handy when using "Sherpa," the ChatGPT-based AI tool we use internally at KINTO Technologies.

K

Self-introduction
I am K from the Project Promotion Division, Project Development Division. I am in charge of Salesforce development and work at the Muromachi Office. My previous job was at a systems integrator (SIer), where I was involved in multi-cloud system implementations across industries.

How is your team structured?
The Salesforce team has 4 people, plus about 10 business partners.

What was your first impression of KINTO Technologies when you joined? Were there any surprises?
My first impression was that there were many technical study sessions.

What is the atmosphere like on site?
There are a lot of experienced engineers, and I noticed that they actively learn new technologies.

How did you feel about writing a blog post?
I believe that writing for the KINTO Technologies Tech Blog will be a valuable experience.

[Question from Kiyuno] What is the most important mindset in development?
I think it is important to be flexible in order to adapt to new situations and deal with evolving technology and changing project requirements. It requires the ability to calmly deal with issues as they arise and find effective solutions. I believe it's important to pursue both creative solutions and routine problem-solving.

Mukai (mt_takao)

Self-introduction
My name is Mukai (mt_takao). I joined the company in December. In my previous job, I was primarily a digital product designer and product manager for a BtoB taxi dispatch application. At KINTO Technologies, as in my previous job, I am in charge of the overall design development of products for Toyota dealers.

How is your team structured?
I am part of the DX Planning Team, Owned Media & Incubation Development Group, Mobility Product Development Division. Our daily mission is to use the power of digital technology to solve the challenges and difficulties faced by Toyota dealers.

What was your first impression of KINTO Technologies when you joined? Were there any surprises?
My impression is that the onboarding process, including orientation, was much more organized than I expected. I had several opportunities to learn about organizational challenges before joining the company.
I made my decision to join after fully understanding them, so there were no major surprises.

What is the atmosphere like on site?
The DX Planning Team, where I belong, is relatively young, and many members have recently joined the company. Despite this, we all share the same attitude of moving forward by drawing on our individual experiences.

How did you feel about writing a blog post?
I see strengthening our ability to communicate as a challenge, both on an individual and an organizational level, and I am grateful for the opportunity to work on it.

[Question from K] Is there a particular design that you consider the best in terms of UI/UX?
It is quite difficult to call anything the best design, but lately I've been paying attention to the Apple Vision Pro. Technologies that expand into the real world with AR and VR have already started to emerge, and I'm thrilled that this tech has finally become a reality.

Reference: Review of the actual Apple Vision Pro: The world of "using the whole space for work" has come (in Japanese)

It seems to be available only in the U.S. for now; I would like to experience it when it becomes available in Japan. As a side note, Productivity Future Vision, which describes Microsoft's vision of the future, is similar to the world that Apple Vision Pro envisions. If you're interested, please feel free to take a look.

Romie

Self-introduction
I am Romie. I joined the company in December 2023. I belong to the Mobile App Development Group, Platform Development Division. I began working with embedded systems, moved on to the web, and am currently developing mobile applications. In the field of mobile apps, I still have a lot to learn.

How is your team structured?
It is separated into iOS and Android, and I am on the Android team. We are five including me, and three of us are foreign nationals. We are an international team.

What was your first impression of KINTO Technologies when you joined? Were there any surprises?
I was amazed by everyone's speed in proactively catching up with the latest technologies. I was impressed by the robust support provided by the company, and pleasantly surprised to find its corporate culture more liberal than I had expected.

What is the atmosphere like on site?
I feel that we can talk to each other without hesitation and work at ease. Despite our diverse backgrounds, we form a well-balanced, collaborative team without hierarchies.

How did you feel about writing a blog post?
Output leads to daily reflection, and the more you share information, the more attention you get, so I'd like to continue doing it!

[Question from Mukai] What do you want to achieve at KINTO Technologies or in the mobility field?
I am in charge of mobile app development, so I want to contribute to KINTO Technologies and the mobility field through the app I am entrusted with. To achieve that, I aim to continuously catch up with the latest technology and work on the growth and development of the products in front of me.

Conclusion

Thank you all for sharing your impressions after joining the company! The number of new members at KINTO Technologies is increasing day by day. I hope you look forward to more posts introducing new members joining our various divisions. Moreover, KINTO Technologies is actively seeking professionals who can collaborate across different divisions and fields! For more information, click here!
Introduction

Hello. I am Shimamura, a DevOps engineer in the Platform Group. At KINTO Technologies, the Platform G (Group) DevOps support team (and SRE team) works on improving monitoring tools and keeping them up to date alongside our CI/CD efforts. Platform G also includes other teams such as the System Administrator team, the CCoE team, and the DBRE team. In addition to designing, building, and operating infrastructure centered on AWS, Platform G is also responsible for system improvement, standardization, and optimization across the entire company. As part of this work, we introduced an APM mechanism using Amazon Managed Service for Prometheus (hereafter Prometheus), X-Ray, and Amazon Managed Grafana (hereafter Grafana), which became GA last year, and that is why I decided to write this article.

Background

When I joined KINTO Technologies (at that time, part of KINTO Corporation) in May 2021, we were monitoring AWS resources and specific log strings. However, this was done using CloudWatch, and the Platform G team was responsible for the design and setup. At that time, metrics for application operations were not being collected. Log monitoring also offered little flexibility in configuration, and error detection relied primarily on AWS metrics and logs, or on passive detection and response through notifications from external monitors. In terms of the maturity levels commonly referred to in O11y (observability), we were not even at Level 0: "implement analytics." We were aware of this problem within our team, however, so we decided to start implementing APM plus X-Ray as a starting point for measurement. Here is a reference to the O11y maturity model.

Elements

APM (Application Performance Management): Managing and monitoring the performance of applications and systems. By examining the response times of applications and systems, as well as component performance, we can understand the overall operational status of applications. This helps us quickly identify the bottlenecks causing system failures, and we can use this information to make improvements.

X-Ray: A distributed tracing mechanism provided by AWS capable of: providing system-wide visibility of the call connections between services; visualizing the call connections between services for a specific request (i.e., the processing path of that request); and quickly identifying system-wide bottlenecks.

Considerations

I first thought about tackling the above-mentioned Level 0 requirement to implement analytics. During the implementation phase, the idea of using Prometheus + Grafana came up. Since it was available in preview as a managed service on AWS at the time, we decided to go with this option. While there are other commonly used SaaS products, such as Datadog, Splunk, New Relic, and Dynatrace, we went with AWS without evaluating them in depth. Later on, I began to understand why those SaaS offerings were not being used; I will delve into the reasons later.

Implementation

Prometheus: As for the metrics output to Prometheus, I summarized it in an article titled "Collecting application metrics from ECS for Amazon Managed Service for Prometheus," written as a KINTO Technologies advent calendar article in 2021.

X-Ray: Taking over the documents left by team members from the evaluation phase, we organized and documented how to incorporate the AWS X-Ray SDK for Java into ECS task definitions and related resources, based on the AWS X-Ray SDK for Java.
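To give a feel for what SDK-based instrumentation involves, here is a minimal sketch (not our actual product code; it assumes the aws-xray-recorder-sdk-core dependency, and the segment and subsegment names are illustrative):

```kotlin
import com.amazonaws.xray.AWSXRay

// Wrap a unit of work in an X-Ray segment and subsegment so that it
// shows up on the ServiceMap with timing information.
fun handleRequest() {
    val segment = AWSXRay.beginSegment("sample-service")
    try {
        AWSXRay.beginSubsegment("load-data")
        try {
            // ... call a downstream service or database here ...
        } finally {
            AWSXRay.endSubsegment()
        }
    } catch (e: Exception) {
        segment.addException(e) // record the error on the trace
        throw e
    } finally {
        AWSXRay.endSegment()
    }
}
```

In web applications, much of this is handled automatically by the SDK's servlet filter and instrumented clients rather than written by hand.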
Improving the Initial Configuration

From the X-Ray SDK to OpenTelemetry

The team that started using Java 17 reached out with concerns about the ServiceMap not displaying correctly in X-Ray. On closer inspection, the AWS X-Ray SDK for Java declares support for Java 8 and 11, but not for Java 17. I decided to move to the AWS Distro for OpenTelemetry Java agent, as it currently seems to be the recommended approach. One of its benefits is that it can operate together with the APM Collector.

Java

Simply download the latest release jar file from aws-observability/aws-otel-java-instrumentation and save it under src/main/jib to deploy. The SDK for Java also included a definition file for sampling settings, which gives the impression that adopting it is straightforward.

Environment Variables for the ECS Task Definition

Add the agent definition to JAVA_TOOL_OPTIONS. We have also added environment variables for OTEL. Check the JSON in the ECS task definition:

```json
{
  "name": "JAVA_TOOL_OPTIONS",
  "value": "-Xms1024m -Xmx1024m -XX:MaxMetaspaceSize=128m -XX:MetaspaceSize=128m -Xss512k -javaagent:/aws-opentelemetry-agent.jar ~~~~~~~"
},
{
  "name": "OTEL_IMR_EXPORT_INTERVAL",
  "value": "10000"
},
{
  "name": "OTEL_EXPORTER_OTLP_ENDPOINT",
  "value": "http://localhost:4317"
},
{
  "name": "OTEL_SERVICE_NAME",
  "value": "sample-traces"
}
```

The above is how it looks (although it may differ slightly in reality, because we use Parameter Store and similar mechanisms).

The OpenTelemetry Collector Config

Using the Configuration documentation as a reference, modify the Collector config as follows. It covers both APM and X-Ray, with metrics labeled per task. Please note that the "awsprometheusremotewrite" exporter used here has been deprecated since v0.18 of aws-otel-collector and removed as of v0.21, so "prometheusremotewrite" with "sigv4auth" should be used instead.

```yaml
receivers:
  otlp:
    protocols:
      grpc:
        endpoint: 0.0.0.0:4317
      http:
        endpoint: 0.0.0.0:4318
  awsxray:
    endpoint: 0.0.0.0:2000
    transport: udp
  prometheus:
    config:
      global:
        scrape_interval: 30s
        scrape_timeout: 20s
      scrape_configs:
        - job_name: "ktc-app-sample"
          metrics_path: "/actuator/prometheus"
          static_configs:
            - targets: [ 0.0.0.0:8081 ]
  awsecscontainermetrics:
    collection_interval: 30s

processors:
  batch/traces:
    timeout: 1s
    send_batch_size:
  resourcedetection:
    detectors:
      - env
      - ecs
    attributes:
      - cloud.region
      - aws.ecs.task.arn
      - aws.ecs.task.family
      - aws.ecs.task.revision
      - aws.ecs.launchtype
  filter:
    metrics:
      include:
        match_type: strict
        metric_names:
          - ecs.task.memory.utilized
          - ecs.task.memory.reserved
          - ecs.task.cpu.utilized
          - ecs.task.cpu.reserved
          - ecs.task.network.rate.rx
          - ecs.task.network.rate.tx
          - ecs.task.storage.read_bytes
          - ecs.task.storage.write_bytes

exporters:
  awsxray:
  awsprometheusremotewrite:
    endpoint: [apm endpoint]
    aws_auth:
      region: "us-west-2"
      service: "aps"
    resource_to_telemetry_conversion:
      enabled: true
  logging:
    loglevel: warn

extensions:
  health_check:

service:
  telemetry:
    logs:
      level: info
  extensions: [health_check]
  pipelines:
    traces:
      receivers: [otlp, awsxray]
      processors: [batch/traces]
      exporters: [awsxray]
    metrics:
      receivers: [prometheus]
      processors: [resourcedetection]
      exporters: [logging, awsprometheusremotewrite]
    metrics/ecs:
      receivers: [awsecscontainermetrics]
      processors: [filter]
      exporters: [logging, awsprometheusremotewrite]
```

This is the current configuration.

Being Put into Use

This step was especially difficult. As mentioned in the introduction, I have been evaluating and providing tools to other teams.
It might seem unconventional, but I wanted to optimize tools with a holistic view, from the standpoint of a DevOps practitioner. That is why I belong to Platform G, which works across the entire organization and facilitates cross-functional activities. As a result, I often find myself in this situation:

Platform G = the party that sees the issues
The people in charge of applications = the party unaware of the issues

But recently, through our consistent dedication, I think people have come to understand the importance of our efforts.

A Case Study in Not Using SaaS

The following are my personal reflections. There is a general perception that SaaS solutions, especially those related to O11y, tend to accumulate large amounts of data, leading to high overall costs. Paying a significant amount for tools that sit "unused" until their utility is understood remains hard to justify in terms of cost-effectiveness. As you progress toward actively addressing O11y maturity Level 2, there will be a demand for overseeing bottlenecks and performance from a bird's-eye view, connecting logs and metrics to events and so on, and that is where the value of such tools may emerge. Even if tools remain separate, I think that is acceptable depending on each person's task load and the amount of passive response involved. It can't be helped that Grafana dashboards tend to be created only ad hoc; if the cost of a SaaS ever falls below the cost of maintaining dashboards, migration will happen. Or so I think.

Impressions

Grafana, Prometheus, and X-Ray are managed services and not as easy to deploy as SaaS, but they are relatively inexpensive. In the early stages of DevOps and SRE efforts, this aspect may be worth weighing when rolling out O11y. I had heard concerns about using SaaS, but through this adoption I have come to appreciate the value of O11y, of reviewing improvements and activities, and of comparing costs before committing to various SaaS products. Overall, I feel positive about it. Dashboards and HostMaps in tools like Datadog and New Relic offer visually appealing designs, giving you a sense of active monitoring as you watch the data move (`・ω・´) I mean, why not!! They look so cool!
Introduction

In Part 1, I explained the variable feature, the item-quantity increase/decrease feature, and how to set up the subtotal. This time, as a continuation, I will explain how to increase the number of items in the cart to two and set the subtotal accordingly, how to set a free-shipping condition, how to calculate the total, and how to change the message displayed when shipping becomes free.

Let's Build a Shopping Cart Mockup Using Figma's Variable Feature! Part 2

![](/assets/blog/authors/aoshima/figma2/1.webp =300x)

Shopping cart: the finished mockup

[Part 1]
What are variables?
Creating the parts
First, the count-up feature
How to create and assign variables
Creating the count-up feature
Setting the subtotal

[Part 2]
Increasing the items to two
Setting the subtotal
Setting up free shipping
Setting the total
Changing the free-shipping message
Completion

Increasing the items to two

To get two items into the cart, first copy the item information handled in Part 1, then update the item photo, name, price, and the number indicating the quantity in the cart. (For duplication, you can of course also add the item using the component's variant feature.)

![](/assets/blog/authors/aoshima/figma2/2.webp =300x)

Duplicate the original item, changing the item name, photo, and price as you do so.

In the following explanation, the original item (SPECIAL ORIGINAL BLEND) is called "Item A," and the newly copied item (BLUE MOUNTAIN BLEND) is called "Item B." At this point, assign the variable "Kosu2" to the number representing Item B's quantity, just as for Item A, and set up the count-up feature on Item B's plus and minus buttons as described in Part 1.

Setting the subtotal

This is an extension of the subtotal setup from Part 1.

Creating and assigning the variable

Part 2 assumes that the cart contains one each of the two items (Item A and Item B), so update the value of the local variable "Shoukei" to the combined amount of 250 (Item A: ¥100 × 1 + Item B: ¥150 × 1). Once you make this change, the number on the canvas bound to this variable updates automatically and the new amount is displayed.

![](/assets/blog/authors/aoshima/figma2/3.webp =300x)

The list of local variables. The red frame marks the variable assigned to the subtotal figure.

![](/assets/blog/authors/aoshima/figma2/4.webp =300x)

The subtotal reflecting the value of the local variable "Shoukei."

Entering the button actions

In Part 1, the subtotal was calculated as the total for Item A alone, so it was configured as in the figure below: select the variable "Shoukei" to be changed when Item A's plus or minus button is clicked, and enter the expression describing what should happen, namely the quantity variable "Kosu1" × 100 (Item A's unit price).

![](/assets/blog/authors/aoshima/figma2/5.webp =300x)

The subtotal expression set in Part 1.

Following the same principle, this time set expressions on the plus and minus buttons of both Items A and B so that the subtotal becomes the combined amount of the two quantities, as shown below.

![](/assets/blog/authors/aoshima/figma2/6.webp =300x)

The settings for Item A's plus button. The dotted frame marks the subtotal settings; of the solid red frames, the left represents Item A and the right represents Item B.

With this setup, the subtotal is recalculated and updated each time a plus or minus button is pressed. If you try the buttons in the preview, you can confirm that the combined amount of the two items is correctly reflected in the subtotal.

![](/assets/blog/authors/aoshima/figma2/7.gif =300x)

Setting up free shipping

Next, I will explain how to implement "Free shipping on purchases of ¥1,000 or more!" The shipping conditions are as follows:

1. If the subtotal is less than ¥1,000, a ¥500 shipping fee is added.
2. If the subtotal is ¥1,000 or more, shipping is free.

Creating and assigning the variable

First, assign a variable to the number representing the shipping fee. In this mockup, the initial cart contains one of each item, with a subtotal of ¥250 and a shipping fee of ¥500, so name the new variable "Shipping," set its value to 500, and assign it to the number next to the shipping label.

![](/assets/blog/authors/aoshima/figma2/8.webp =300x)

The variable "Shipping" assigned to the number next to the shipping fee.

Entering the button actions

Next, configure the button actions that determine the shipping fee. Since the resulting fee branches depending on whether the subtotal is below ¥1,000 or not, use an if statement. If the subtotal is less than ¥1,000, shipping is ¥500, which can be expressed as follows:

![](/assets/blog/authors/aoshima/figma2/9.webp =300x)

This expression means that if the subtotal is less than ¥1,000, the value of "Shipping" is set to 500. Since "Shipping" is initialized to 500, you may wonder whether setting the same value again is necessary. However, with this setting in place, if the subtotal rises to ¥1,000 or more (setting shipping to ¥0) and then drops below ¥1,000 again, the shipping fee can be restored from ¥0 back to ¥500.

Next, if the subtotal is ¥1,000 or more, shipping is ¥0, which can be expressed as shown in the red frame below:

![](/assets/blog/authors/aoshima/figma2/10.webp =300x)

This means that if the subtotal is ¥1,000 or more, the value of "Shipping" is set to 0. Incidentally, "else" covers all cases not matched by the "if" condition. Since the "if" here covers subtotals below ¥1,000, the "else" covers everything else, that is, ¥1,000 or more.

After applying these settings to each button, previewing shows that the shipping fee is displayed as "¥0" once the subtotal reaches ¥1,000. With this setup, the shipping fee adjusts automatically according to the subtotal.

![](/assets/blog/authors/aoshima/figma2/11.webp =300x)

When the subtotal reaches ¥1,000, shipping becomes ¥0.

Setting the total

Next, let's set up the total amount.

Creating and assigning the variable

Name the variable for the total amount "T_Am," short for "Total Amount." To repeat, this mockup assumes the cart holds one each of Items A and B, with a subtotal of ¥250 and shipping of ¥500, so set "T_Am" to the initial total of 750. Assigning "T_Am" to the number representing the total displays the value "750."

![](/assets/blog/authors/aoshima/figma2/12.webp =300x)

The variable assigned to the total amount.

Entering the button actions

The total also needs conditional branching on whether the subtotal is below ¥1,000 or not. Since the condition is the same as for the shipping fee, add actions to the existing setup. Hovering next to the if statement reveals a "+" button labeled "Add nested action"; pressing it opens a space for additional settings. You can add as many actions as you like under a single condition in this way.
![](/assets/blog/authors/aoshima/figma2/13.webp =300x)

If the subtotal is less than ¥1,000, the total = subtotal + shipping, written as shown in the red frame below:

![](/assets/blog/authors/aoshima/figma2/14.webp =300x)

If the subtotal is ¥1,000 or more, the total = subtotal (+ ¥0 shipping), written as shown in the red frame below. Note that this goes inside the "else" block.

![](/assets/blog/authors/aoshima/figma2/15.webp =300x)

After applying the settings to each button, previewing shows that once the subtotal reaches ¥1,000, shipping becomes ¥0 and the total reflects it.

![](/assets/blog/authors/aoshima/figma2/16.webp =300x)

Changing the free-shipping message

Finally, let's modify the free-shipping message. Here, we want to hide the free-shipping message below the header (the red frame) once shipping becomes free.

![](/assets/blog/authors/aoshima/figma2/17.webp =300x)

Creating and assigning the variable

Boolean variables are commonly used to toggle things like visibility. A Boolean is a data type used to represent either-or conditions such as true/false or yes/no. For visibility toggles like this one, Figma automatically maps "true" to visible and "false" to hidden, so we use that mapping as is.

First, open the local variables panel and press the create-variable button, choosing "Boolean" as the data type. Since the message relates to shipping, I named it "Ship_Txt." In the cart's initial state the subtotal is below ¥1,000 and the message needs to be shown, so the initial value is "true."

![](/assets/blog/authors/aoshima/figma2/18.webp =300x)

A Boolean created in the local variables panel, with its initial value set to true.

Next, here is how to assign the variable you created. First, select the target object on the canvas. Then, in the "Layer" section of the right-hand panel, right-click the "eye" icon next to Pass-through. This icon is not shown permanently, so it can be easy to miss. Right-clicking displays a dropdown list of assignable variables; select the variable you created earlier.

![](/assets/blog/authors/aoshima/figma2/19.webp =300x)

Entering the button actions

The message's visibility also branches on the subtotal, so add the corresponding actions. If the subtotal is less than ¥1,000, the message is shown ("Ship_Txt" = true), so add the setting shown below:

![](/assets/blog/authors/aoshima/figma2/20.webp =300x)

The action that sets the Boolean variable "Ship_Txt" to "true."

If the subtotal is ¥1,000 or more, the message is hidden ("Ship_Txt" = false), so add the following. Note that this goes inside the "else" block.

![](/assets/blog/authors/aoshima/figma2/21.webp =300x)

The action that sets the Boolean variable "Ship_Txt" to "false."

After configuring each button and running the preview, you can see the message disappear when the subtotal reaches ¥1,000.

![](/assets/blog/authors/aoshima/figma2/22.webp =300x)

The message is now hidden successfully. However, this leaves an awkward empty space, which is not great from a layout standpoint, so let's also try changing the message itself.

Changing the free-shipping message, version 2

Creating and assigning the variables

Assuming shipping is either charged or free, we will set up the following two messages to swap:

Below ¥1,000: "Spend ¥XX more for free shipping!"
¥1,000 or more: "Free shipping!"

I turned the free-shipping message into a component named "Ship_Txt_Panel" and created two variants. Since we want to switch between them, enter a Boolean value for each variant's property.

![](/assets/blog/authors/aoshima/figma2/23.webp =300x)

First, select the upper variant and display the property-editing section in the right-hand panel. This variant is the one shown in the initial state, so set it to "true."

![](/assets/blog/authors/aoshima/figma2/24.webp =300x)

Then set the lower variant's property to false.

![](/assets/blog/authors/aoshima/figma2/25.webp =300x)

Once the properties are set, place an instance of the component in the design. With the instance selected, check the right-hand panel: a Boolean toggle switch appears in the instance section.

![](/assets/blog/authors/aoshima/figma2/26.webp =300x)

The instance placed in the design is selected.

![](/assets/blog/authors/aoshima/figma2/27.webp =300x)

The toggle switch in the right-hand panel is in the true state.

Flipping this toggle switches the instance's contents, confirming that the Boolean values are set up correctly.

![](/assets/blog/authors/aoshima/figma2/28.webp =300x)

Switching the right-hand panel's toggle to false.

![](/assets/blog/authors/aoshima/figma2/29.webp =300x)

The instance's contents switch over.

If you then hover the cursor over the toggle switch, an icon for assigning a variable appears together with floating text. Click it to display the candidate variables, select the Boolean variable "Ship_Txt," and assign it to the instance.

![](/assets/blog/authors/aoshima/figma2/30.webp =300x)

Clicking the icon in the red frame displays the variable candidates.

![](/assets/blog/authors/aoshima/figma2/31.webp =300x)

The variable assigned to the instance.

Entering the button actions

The button actions here are the same as those set earlier for showing and hiding the free-shipping message, so no changes are needed. Previewing right away, you can see the message change once the subtotal reaches ¥1,000.

![](/assets/blog/authors/aoshima/figma2/32.webp =300x)

Finally, let's make the amount inside the message change along with the subtotal.

Creating and assigning the variable

Edit the text inside the component, splitting the variable amount from the rest of the message.

![](/assets/blog/authors/aoshima/figma2/33.webp =300x)
The variable amount portion is highlighted.

Next, create the variable to assign to this variable portion. Choose "Number" as the data type and name it "Extra_Fee." Its value represents the difference remaining until the ¥1,000 free-shipping threshold. Since the cart's subtotal is ¥250, ¥1,000 - ¥250 = ¥750, so set "Extra_Fee" to 750.

![](/assets/blog/authors/aoshima/figma2/34.webp =300x)

Assigning the variable to the number in the variable portion looks like this:

![](/assets/blog/authors/aoshima/figma2/35.webp =300x)

Entering the button actions

Configure the following so that this portion changes as the subtotal increases or decreases. Note that when the subtotal is ¥1,000 or more, the message itself switches over, so no setting is needed for that case.

![](/assets/blog/authors/aoshima/figma2/36.webp =300x)

Completion

After configuring each button and running the preview, you can confirm that the amount in the message changes as the plus (or minus) buttons are pressed, and that the message itself switches once the subtotal crosses ¥1,000.

![](/assets/blog/authors/aoshima/figma2/37.webp =300x)

This concludes "Let's Build a Shopping Cart Mockup Using Figma's Variable Feature!" The features introduced along the way can be applied in many situations, and I hope you find them useful.
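As a side note, for readers who find it easier to follow logic in code, the behavior assembled in this mockup corresponds roughly to the following sketch (a hypothetical Kotlin rendering, not part of Figma itself; the names mirror the Figma variables used above):

```kotlin
// Hypothetical sketch of the mockup's rules; names mirror the Figma variables.
const val FREE_SHIPPING_THRESHOLD = 1000
const val PRICE_A = 100 // Item A unit price (yen)
const val PRICE_B = 150 // Item B unit price (yen)

fun updateCart(kosu1: Int, kosu2: Int) {
    val shoukei = kosu1 * PRICE_A + kosu2 * PRICE_B                   // subtotal ("Shoukei")
    val shipping = if (shoukei < FREE_SHIPPING_THRESHOLD) 500 else 0  // "Shipping"
    val tAm = shoukei + shipping                                      // total ("T_Am")
    val shipTxt = shoukei < FREE_SHIPPING_THRESHOLD                   // "Ship_Txt" variant toggle
    val extraFee = FREE_SHIPPING_THRESHOLD - shoukei                  // "Extra_Fee"

    println("Subtotal: $shoukei, Shipping: $shipping, Total: $tAm")
    println(if (shipTxt) "Spend $extraFee yen more for free shipping!" else "Free shipping!")
}

fun main() {
    updateCart(kosu1 = 1, kosu2 = 1) // initial state: subtotal 250, shipping 500
    updateCart(kosu1 = 4, kosu2 = 4) // subtotal 1,000: shipping becomes free
}
```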
Introduction

Hello, I am Sugimoto from the Creative Office. This article is the second part of our two-part series introducing the creation of our mascot character design. Our first post detailed our journey from receiving the request through to conceptualization. Firstly, every employee has been part of the process. Secondly, the mascot character project (referred to as "the PJ" from now on) made its selection based on KINTO's vision and brand personality, as well as future developments and branding, rather than simply on popularity. Thirdly, a survey was conducted among all employees, using character concepts volunteered by employees. The purpose of the poll was to gauge the popularity of ideas revolving around the motif of "clouds," which is also where KINTO's corporate name comes from. Specifically, the following characters were the most popular: the one on the left for its ability to shape-shift at will, and the one on the right for its charm as a cloud transformed into a car. By the way, as a manager, I feel relieved that both ideas from the Creative Office were selected.

These two proposals gained popularity.

Bringing Life to a Cloud Motif

1. Not all deliverables should be handled in-house!

Based on the above two proposals, the next step was illustration. People often assume that all designers can take pictures, make videos, and even draw illustrations. I frequently get questions like, "Since there is no money to outsource, can we do this in-house?" or "Can't you just use AI to make it quickly?" Among our in-house designers, of course, there are team members who are good at illustrating. However, what matters here is discerning when to delegate certain tasks to specialists, as in the Japanese expression "leave mochi-making to the mochi shop." We are breathing life into a character here! We need to distinguish between a well-drawn illustration and an illustration that brings a character to life. As a creator myself, I strongly believe that the most respectful approach is to seek input from specialists in illustration and character design when aiming to create something of quality. I would say this is an example of deliverables that cannot, and should not, be produced in-house.

2. Then what should we do? Outsourcing the illustration

Luckily, despite budget constraints, the business side of the PJ team was filled with members who genuinely respected the creative process. They did not simply say, "Designers can draw illustrations too, so they should do it"; instead, they worked hard to increase the production budget for the illustrations. We decided to rely on Steve* Inc., a creative company that specializes in branding and planning design for companies, products, and communities. They worked with us to create a story that brought the character to life while staying close to the concept the PJ wanted to uphold. What we, the Creative Office, requested from Steve* Inc. was a character that even adults would want to own, for example as merchandise. We asked for a tone that is cute and adorable while also appealing to adults. Based on our request, they provided three proposals for characters with a cloud motif: A: The Mysterious Creature K, B: Haguregumo, and C: Kumorisu (the squirrel-cloud). With expressions that made the viewer want to protect them, they all appeared to be watching over us from above. Steve* Inc. did a great job indeed! All the PJ members listened to the presentation with excitement.
Next, we conducted a survey asking all employees to share the good points of, and their concerns about, each proposal. By judging from multiple perspectives through this survey, we were able to look beyond appearance and gain insights into each character's potential issues. As a result, we decided on proposal A, "The Mysterious Creature K"!

"Mysterious Creature K": I'm sure you'll ask, "Is this really its name?"

Wouldn't it be interesting to take advantage of this impactful name and run promotions that kept a sense of "mystery" around the name and its existence? This is why the meetings with marketing and social media staff also became very exciting. Now that the form was decided, the next step was to polish it up. Additionally, it was renamed in hiragana to enhance readability and familiarity.

The "Mysterious Creature K" starts to take shape!

I think the somewhat absentminded look on "Mysterious Creature K" is also pretty good. However, we continued to refine its form and facial features to ensure it will be beloved for years to come. Specific examples:

We wanted to bring its form closer to the letter "K" (some pointed out that at a quick glance it might not be immediately recognizable as such).
We wanted to give it a little more of a cloud-like appearance (it initially resembled hand-soap bubbles or marshmallows).
We wanted to slightly adjust the balance between the eyes and the body, considering the cloud motif (a bit more of the balance of Steve* Inc.'s early illustrations, where the cloud looks bigger).
We wanted to create a little more contrast between the whites of the eyes and the "cloud" in the 3D version of K's eyes (only the black parts were visible, so we adjusted the edge lines and shadows to make them look a little more like the 2D K's eyes and keep the cuteness; we also thought a matte texture might be more suitable).
We wanted to add more brand colors to the 3D version of K (on the black parts of the eyes, the body shadows, and so on).

In addition to the 2D illustrations, we decided to make 3D ones as well, to ensure they integrate seamlessly with vehicle images. And what about the fluffiness? While it may work well in illustrations, how will it translate into a costume? What color should the eyes be? Should the character have no mouth and remain silent, refraining from engaging in sales talk? These were all things we considered as we developed the character's personality and characteristics. And this is its current form and expression!

The naming campaign was launched in July 2023 and received a total of 932 submissions. Among them, we grouped the options based on several criteria (see Part 1) to determine the best name:

Kumo no Kinton
Kumobii
Mysterious Creature K
K

Although both "Kumo no Kinton" and "Kumobii" were popular among our customers (KINTO subscribers in Japan), "Kumobii" emerged as the most popular choice among the target generation of teens to thirty-somethings, and internal voting confirmed its first-place ranking. Hence, we opted for the name "Kumobii." The name derives from the combination of "kumo (cloud)" and "mobility," which is very KINTO-like, and the PJ members were satisfied with it. This is how "Kumobii" was born. I believe there will be more opportunities for its exposure in company promotions from now on. We're excited for you to see them! Check out the unique features of "Kumobii"!

▼ Click here for the story of Kumobii ▼
Introduction

Hello. I am Nakaguchi from the KINTO Technologies Mobile App Development Group. On May 23, 2024, I took part in "TechBrew in Tokyo: Facing the Technical Debt of Mobile Apps," and this is my report on the event.

The Day of the Event

The venue was Findy's newly relocated office. I had heard the rumors, and the event space really was large and beautiful; it got me excited 😀. True to the name TechBrew, plenty of alcohol and light snacks were provided, and the atmosphere was very relaxed. That said, since I had a lightning talk to give, I held off on the alcohol until my presentation was over 👍.

LT1: "A Walking Guide to Evolving Bitkey's Mobile App"

This talk covered the history of Bitkey's mobile app up to the present. The app was originally built with React Native and evolved through a move to native, then the introduction of SwiftUI, then the introduction of TCA. However, the SwiftUI migration is still only partway done, and the speaker admitted it may have been a mistake: SwiftUI's behavior changes depending on the iOS version, which caused a lot of pain. I have had the same experience, so I sympathized. Two remarks in the talk left a strong impression on me: "everything we believe is good is a correct answer," and "the decision made back then was surely the right one at the time." I found myself agreeing. I also had a chance to talk with the speaker, Ara-san, at the social afterwards; they knew many things I didn't, such as Swift on Windows, and it was a very enjoyable conversation.

LT2: "An Approach to Tackling Mobile App Technical Debt Company-Wide"

This talk addressed what technical debt is and how to confront it. The speaker argued that technical debt should be separated into two kinds: debt that was recognized and accepted in exchange for some return, and debt that went unrecognized, either because nobody noticed it in the first place or because environmental changes turned it into debt. The former does not become a big problem, but leaving the latter unattended for too long can cause problems beyond what the team can absorb. To confront technical debt, you need to negotiate for time to pay it down, even if that means pausing business tasks, and the debt must be treated as everyone's problem, stakeholders included, not just the development team's. I found this very convincing; engineering managers and team leads in particular need that kind of negotiating skill. The speaker also mentioned using Four Keys metrics to visualize the situation, while warning that treating the numbers themselves as the goal is dangerous. I have always felt that visualizing a team's development capability is hard, and I too try not to rely too heavily on frameworks like Four Keys.

LT3: "How Safie Viewer for iOS Lives with Technical Debt"

The speaker develops an app that has been in service for ten years and talked about the pains that come with it and the policies for dealing with them. Much of the technology in use dates from the original release, and while they want to re-architect, nothing fatal has actually happened, and they are still able to ship plenty of new features. That makes it hard to justify time-consuming refactoring, so it has been difficult to make progress on paying down the debt. They currently work on what they can along the following two broad lines:

Do immediately what can be done immediately:
- Update as soon as a new Xcode version is released (some code cannot be written without upgrading, so staying behind breeds legacy code)
- Introduce Danger

Tackle the big things deliberately:
- The app is currently MVC/MVP, with closure-based asynchronous processing
- Re-architecting from this state is risky
- Validate modern technologies in new features only

To actually carry this out, they said you need to draw up a concrete schedule, which rang true to me. I also hesitate a great deal over large refactorings, so I felt that committing to a schedule and seeing it through is essential.

LT4: "Moving Mobile Development Forward Safely with Package Management"

Like LT3, this talk was about an app with a long history, eight years in this case, focusing on how they paid down debt through commonization and separation of code. A recent pain point is excessive commonization in many places. As an example, their Channel data ended up holding around 100 parameters (borrowing the speaker's notation), carrying data that is not all used every time, and this pattern appeared throughout the codebase. On the other hand, splitting responsibilities too finely also needs caution: they often saw code separated out even though it was called from only one place. The message that you should "commonize deliberately" and "divide responsibilities deliberately" struck home; I suspect I have also split things off without thinking deeply enough. The speaker then introduced ways of thinking about, and methods for, managing this with a package manager.

LT5: "Taking On Technical Debt with GitHub Copilot"

This was my talk; the slides are here. Using GitHub Copilot in Xcode is still far more limited than in officially supported editors like VS Code, and I feel adoption has been slow as a result. On the other hand, I found that even in Xcode the Chat feature can contribute to paying down technical debt, so that is what I presented. Partway through, I gave a live demo of the Chat feature, and I could feel the audience's attention sharpen; it made me very happy that people listened with such interest. It was my first time presenting at an external event, but the audience was warm, and I got through the presentation safely.

Closing

After the lightning talks there was a social, where I exchanged information with many people. It was very stimulating, and I want to keep actively participating in and speaking at external events like this. I also got to talk with Takahashi-san, the event's organizer, about possibly holding a joint event between our mobile group and Findy, and I hope we can actively pursue that. As a souvenir, I got an IPA brewed by Findy!
Unit testing with Flutter Web

Hello. I am Osugi from the Woven Payment Solution Development Group. My team is developing the payment system that will be used by Woven by Toyota for the Toyota Woven City. We mainly use Kotlin/Ktor for backend development and Flutter for the frontend. In Flutter Web, test runs can fail with errors when web-specific packages are used. In this article, therefore, I would like to summarize what we do to keep Flutter Web code testable, with a particular focus on unit testing. If you're interested in the story behind our frontend development journey so far, feel free to check out these articles:

A Kotlin Engineer's Introduction to Flutter and Making a Web App Within a Month
The Best Practices Found by Backend Engineers While Developing Multiple Flutter Applications at Once

What is Flutter Web?

First of all, Flutter is a cross-platform development framework developed by Google, and Flutter Web is the part of it specialized for web application development. Dart, Flutter's development language, can be compiled to JavaScript ahead of time, with rendering done via HTML, Canvas, and CSS, allowing code developed for mobile applications to be ported directly to web applications.

How to Implement Flutter Web

Basic implementation works the same way as in mobile application development. On the other hand, what if you need DOM manipulation or browser APIs? These are available through Dart's built-in packages for the web platform, such as dart:html [^1]. For example, a file download feature can be implemented just as in ordinary JavaScript web development.

:::message
The SDK versions at the time of writing are Dart v3.2 and Flutter v3.16.
:::

The widget below is a sample application with a feature of questionable practical value: it downloads the counted-up number as a text file when the FloatingActionButton is clicked.

```dart
import 'dart:html';

import 'package:flutter/material.dart';

class MyHomePage extends StatefulWidget {
  const MyHomePage({super.key, required this.title});

  final String title;

  @override
  State<MyHomePage> createState() => _MyHomePageState();
}

class _MyHomePageState extends State<MyHomePage> {
  int _counter = 0;

  void _incrementCounter() {
    setState(() {
      _counter++;
    });
  }

  @override
  Widget build(BuildContext context) {
    return Scaffold(
      appBar: AppBar(
        backgroundColor: Theme.of(context).colorScheme.inversePrimary,
        title: Text(widget.title),
      ),
      body: Center(
        child: Column(
          mainAxisAlignment: MainAxisAlignment.center,
          children: <Widget>[
            const Text(
              'You have pushed the button this many times:',
            ),
            Text(
              '$_counter',
              style: Theme.of(context).textTheme.headlineMedium,
            ),
            IconButton(
              onPressed: _incrementCounter,
              icon: const Icon(Icons.add),
            )
          ],
        ),
      ),
      floatingActionButton: FloatingActionButton(
        onPressed: () {
          AnchorElement(href: 'data:text/plain;charset=utf-8,$_counter')
            ..setAttribute('download', 'counter.txt')
            ..click();
        },
        tooltip: 'Download',
        child: const Icon(Icons.download),
      ),
    );
  }
}
```

Unit Testing Flutter Web Code

The test code for the sample above is prepared as follows (mostly as output by flutter create).
```dart
import 'package:flutter/material.dart';
import 'package:flutter_test/flutter_test.dart';

import 'package:sample_web/main.dart';

void main() {
  testWidgets('Counter increments smoke test', (WidgetTester tester) async {
    await tester.pumpWidget(const MyApp());

    expect(find.text('0'), findsOneWidget);
    expect(find.text('1'), findsNothing);

    await tester.tap(find.byIcon(Icons.add));
    await tester.pump();

    expect(find.text('0'), findsNothing);
    expect(find.text('1'), findsOneWidget);
  });
}
```

Run the following test command, or run the test code above from the Testing tab of VS Code:

```
$ flutter test
```

If you run the test as is, you will probably get an error like the following:

```
Error: Dart library 'dart:html' is not available on this platform.
// omitted
lib/utils/src/html_util.dart:4:3: Error: Method not found: 'AnchorElement'.
  AnchorElement(href: 'data:text/plain;charset=utf-8,$data')
```

Apparently, there is something wrong with importing dart:html.

Platform-Specific Dart Compilers

The official documentation indicates that Dart targets two platforms: the native platform, which includes the Dart VM with a JIT compiler and an AOT compiler for producing machine code, and the web platform, which transpiles Dart code into JavaScript. In addition, some of the packages available on each platform differ:

| Platform | Available packages |
|---|---|
| Native | dart:ffi, dart:io, dart:isolate |
| Web | dart:html, dart:js, dart:js_interop, etc. |

So it turns out that the test above was running on the VM, where dart:html is not available. Specifying the platform at test time is one way to avoid import errors for web-platform packages. You can specify that the test should run on Chrome (as the web) by running the command with the following option [^2]:

```
$ flutter test --platform chrome
```

:::message
You can confirm that a test run without the option uses the VM with flutter test --help --verbose:

--platform    Selects the test backend.
      [chrome] (deprecated)    Run tests using the Google Chrome web browser. This value is intended for testing the Flutter framework itself and may be removed at any time.
      [tester] (default)       Run tests using the VM-based test environment.
:::

Should Flutter Web Test Code Run on Chrome?

When developing web applications, using browser APIs is inevitable, but should Flutter Web test code be run on Chrome? In my personal opinion, it is better to avoid Chrome as much as possible. The reasons are:

Running tests requires Chrome to be launched in the background, which increases test startup time.
Chrome must be installed in the CI environment, which increases the container size of the CI environment, or it may take a long time to set up containers, considerably increasing the monetary cost of CI. (Of course, if you just want a quick local check, or if money is no object, no problem!)

I measured this locally, comparing the standard case (Native, no platform specified) with the Chrome case (Web):

| Platform | Program run time (sec) | Total test run time (sec) |
|---|---|---|
| Native | 2.0 | 2.5 |
| Web | 2.5 | 9.0 |

As the table shows, the Web case took significantly longer overall to launch and run the tests, and the program run time itself also increased by about 25%.

![tester](/assets/blog/authors/osugi/20240301/annoying.png =400x)

Separating Web Platform-Dependent Code

Can the above error be avoided without specifying the web platform?
In fact, Dart also offers conditional imports and exports for packages, along with flags to determine whether the platform is web or native [^3]:

| Flag | Description |
|---|---|
| dart.library.html | Whether it is a web platform |
| dart.library.io | Whether it is a native platform |

These can be used to avoid the errors. First, prepare download-function modules for web and native as follows, separating the web-package usage from the code under test.

```dart
// util_html.dart: the web implementation
import 'dart:html';

void download(String fileName, String data) {
  AnchorElement(href: 'data:text/plain;charset=utf-8,$data')
    ..setAttribute('download', fileName)
    ..click();
}
```

```dart
// util_io.dart: the native implementation
void download(String fileName, String data) =>
    throw UnsupportedError('Not support this platform');
```

Here is how to switch the import of the module for each platform:

```diff
import 'package:flutter/material.dart';
- import 'dart:html';
+ import './utils/util_io.dart'
+     if (dart.library.html) './utils/util_html.dart';

class MyHomePage extends StatefulWidget {
  // omitted
}

class _MyHomePageState extends State<MyHomePage> {
  // omitted

  @override
  Widget build(BuildContext context) {
    return Scaffold(
      // omitted
      floatingActionButton: FloatingActionButton(
        onPressed: () {
-         AnchorElement(href: 'data:text/plain;charset=utf-8,$_counter')
-           ..setAttribute('download', 'counter.txt')
-           ..click();
+         download('counter.txt', _counter.toString());
        },
        tooltip: 'Download',
        child: const Icon(Icons.download),
      ),
    );
  }
}
```

If you want to use export instead, you have to prepare a separate intermediary file such as util.dart and import it from the widget side (omitted here):

```dart
export './utils/util_io.dart' if (dart.library.html) './utils/util_html.dart';
```

You can now run your tests on the native platform, avoiding the errors caused by web-dependent code.

Creating Native-Platform Stubs for Web-Dependent External Packages

Our system uses Keycloak as its authentication infrastructure, and the following package is used for Keycloak authentication in Flutter web applications. If you open the link, you'll see that this package only supports the web. Thanks to this package, the authentication process was implemented with ease. However, due to the nature of an authentication module, its interface is used in many places. Consequently, all API calls and other widgets that require authentication information depend on the web platform, making them impossible to test in CI. (In the meantime, we had been testing locally with the --platform chrome option, and if everything passed, that was considered OK.) In addition, importing this package causes the following error during test execution:

```
Error: Dart library 'dart:js_util' is not available on this platform.
```

Therefore, I will apply the same approach to the external package as with the import separation above, but this time using the export pattern. The procedure is as follows.

1. Create an intermediary package

As an example, I created a package called inter_lib inside the sample code package:

```
flutter create inter_lib --template=package
```

In the actual product code, the external package is wrapped by a package separate from the product, to prevent code tied to the external package from leaking into the product code. I recommend Melos, as it makes multi-package development easy.

2. Create a stub for the native platform

To create a stub for keycloak_flutter, refer to the GitHub repository and mirror its interface (check the license as appropriate); a minimal sketch of such a stub is shown below.
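As a rough illustration only (hypothetical code, not the actual stub; the real keycloak_flutter interface has more members, and only what your product code uses needs to be mirrored), the stub might look like this:

```dart
// stub_keycloak.dart: no-op stand-ins for the keycloak_flutter types, so
// that test runs on the native platform compile without 'dart:js_util'.
class KeycloakConfig {
  final String url;
  final String realm;
  final String clientId;

  KeycloakConfig({
    required this.url,
    required this.realm,
    required this.clientId,
  });
}

class KeycloakInitOptions {
  final String? onLoad;
  final bool? enableLogging;
  final bool? checkLoginIframe;

  KeycloakInitOptions({this.onLoad, this.enableLogging, this.checkLoginIframe});
}

class KeycloakService {
  KeycloakService(KeycloakConfig config);

  // Mirrors the signature used in main.dart; does nothing on native.
  Future<void> init({KeycloakInitOptions? initOptions}) async {}
}
```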
All classes and methods used in the product code are required. The files created look like this; the stub_ prefix under the src directory marks simulations of the external package's interface:

```
inter_lib
├── lib
│   ├── keycloak.dart
│   └── src
│       ├── stub_keycloak.dart
│       ├── stub_keycloak_flutter.dart
│       └── entry_point.dart
```

Also, entry_point.dart is defined to export the same names as the actual external package (in fact, only the interface used in the product code is sufficient):

```dart
export './stub_keycloak.dart'
    show
        KeycloakConfig,
        KeycloakInitOptions,
        KeycloakLogoutOptions,
        KeycloakLoginOptions,
        KeycloakProfile;
export './stub_keycloak_flutter.dart';
```

To expose inter_lib internally as a package, configure the export as follows:

```dart
library inter_lib;

export './src/entry_point.dart'
    if (dart.library.html) 'package:keycloak_flutter/keycloak_flutter.dart';
```

3. Add the intermediary package to dependencies in pubspec.yaml

Add a relative path to inter_lib in pubspec.yaml:

```diff
// omitted
dependencies:
  flutter:
    sdk: flutter
  cupertino_icons: ^1.0.2
+ inter_lib:
+   path: './inter_lib'
// omitted
```

Then replace the original reference to the external package with inter_lib:

```diff
- import 'package:keycloak_flutter/keycloak_flutter.dart';
+ import 'package:inter_lib/keycloak.dart';
import 'package:flutter/material.dart';

import 'package:sample_web/my_home_page.dart';

void main() async {
  WidgetsFlutterBinding.ensureInitialized();
  final keycloakService = KeycloakService(
    KeycloakConfig(
      url: 'XXXXXXXXXXXXXXXXXXXXXX',
      realm: 'XXXXXXXXXXXXXXXXXXXXXX',
      clientId: 'XXXXXXXXXXXXXXXXXXXXXX',
    ),
  );
  await keycloakService.init(
    initOptions: KeycloakInitOptions(
      onLoad: 'login-required',
      enableLogging: true,
      checkLoginIframe: false,
    ),
  );
  runApp(
    const MyApp(),
  );
}
```

The above outlines the process of creating a native-platform stub for a web-platform-dependent external package. The tests can now run on the VM. This method can of course be applied to packages other than the keycloak_flutter used in this example.

![successful people](/assets/blog/authors/osugi/20240301/success.png =480x)

Summary

This article summarized our approach to keeping Flutter Web code testable:

Dart's execution environments comprise a web platform and a native platform.
flutter test runs on the native platform, so using a web-platform package such as dart:html causes an error.
This can be solved by switching between the real package and a stub for each platform, using the dart.library.io and dart.library.html flags.
Introduction

I am Hand-Tomi, and I work on developing my route for Android at KINTO Technologies. It has been almost a year since Android 14 was released on April 12, 2023. However, I feel that the concept of "Regional Preferences" on Android remains unclear to many, which is why I have chosen to delve into the topic in this article. Developing multilingual applications without understanding regional settings risks running into unforeseen bugs, and I hope this article helps readers mitigate those risks.

Key Points Covered in This Article

```kotlin
Locale.getDefault() == Locale.JAPAN
```

:::details Code description
- Locale : a class representing a specific cultural and geographic setting based on language, country, or region
- Locale.getDefault() : returns the current default Locale for the application
- Locale.JAPAN : a Locale instance representing the Japanese language ( ja ) and country ( JP )
:::

Does the above code output true if the device is set to Japanese (Japan)? Or does it output false ? The correct answer is true on Android 13 and below, and unknown for Android 14 and above, given only this much information. This article explains why it is unknown for Android 14 and above!

What is Locale on Android?

Locale is a class that represents a cultural or geographic setting based on language, country, or region. Using this information, Android applications can be adapted to diverse users. Locale deals mainly with languages and countries, but more data can be extracted by using LocalePreferences .

```kotlin
val locale = Locale.getDefault()
println("calendarType = ${LocalePreferences.getCalendarType(locale)}")
println("firstDayOfWeek = ${LocalePreferences.getFirstDayOfWeek(locale)}")
println("hourCycle = ${LocalePreferences.getHourCycle(locale)}")
println("temperatureUnit = ${LocalePreferences.getTemperatureUnit(locale)}")
```

If you execute the above code on a device set to "Japanese (Japan)", it outputs the following.

- calendarType = gregorian : calendar system = Gregorian calendar
- firstDayOfWeek = sun : first day of the week = Sunday
- hourCycle = h23 : hour cycle = 0-23
- temperatureUnit = celsius : temperature = Celsius

What is "Regional Preferences"?

Introduced in Android 14, the "Regional Preferences" feature lets you customize the "temperature" and "first day of week" otherwise determined by the Locale (language and country).

- Temperature: Use app default / Celsius (°C) / Fahrenheit (°F)
- First day of week: Use app default / Monday through Sunday

(Screenshots: the temperature setting screen and the first-day-of-week screen)

:::details How to reach the settings
The "Regional Preferences" screen can be accessed from "System" > "Language" in the Settings app.
![setting](/assets/blog/authors/semyeong/2024-02-28-regional-preferences/setting.png =300x)
:::

Why do we need "Regional Preferences"?

Both the United States and the Netherlands may use English, but the temperature unit and the first day of the week differ.

| | United States | Netherlands |
| --- | --- | --- |
| Temperature | Fahrenheit | Celsius |
| First day of week | Sunday | Monday |

If a Dutch person living in the United States is accustomed to Celsius and wants to change only the temperature unit to Celsius, "Regional Preferences" makes that possible.

What changes when you set "Regional Preferences"?

```kotlin
Locale.getDefault().toString()
```

To check the setting values, let's change each setting while using the code above.
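As a side note before looking at the results: rather than parsing the whole toString() output, the individual extension values can also be read with java.util.Locale's Unicode-extension accessor. A minimal sketch follows; getUnicodeLocaleType is a standard java.util.Locale API, and the fw / mu keys match the extension values that appear in the results table below, but please verify API availability for your minSdk.

```kotlin
import java.util.Locale

fun main() {
    val locale = Locale.getDefault()
    // "fw" (first day of week) and "mu" (measurement/temperature unit) are the
    // BCP 47 Unicode extension keys set by "Regional Preferences"; the accessor
    // returns null when the user has kept the app/locale default.
    println("fw = ${locale.getUnicodeLocaleType("fw")}") // e.g. "mon"
    println("mu = ${locale.getUnicodeLocaleType("mu")}") // e.g. "fahrenhe"
}
```

With that in mind, here are the results of changing each setting.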
| Language | Temperature | First day of week | Result |
| --- | --- | --- | --- |
| Japanese (Japan) | Default | Default | ja_JP |
| Japanese (Japan) | Fahrenheit | Default | ja_JP_#u-mu-fahrenhe |
| Japanese (Japan) | Default | Monday | ja_JP_#u-fw-mon |
| Japanese (Japan) | Fahrenheit | Monday | ja_JP_#u-fw-mon-mu-fahrenhe |

Setting "Temperature" and "First day of week" produced cryptic output such as #u , mu-fahrenhe , and fw-mon ; these are the values of Locale 's localeExtensions member. Thus, once a value is set in localeExtensions , the results of hashCode() and equals() for the Locale also change, and comparing with Locale.JAPAN no longer returns true .

Then, how do we check the language?

```kotlin
Locale.getDefault() == Locale.JAPAN                        // ✕
Locale.getDefault().language == Locale.JAPANESE.language   // ○
```

If you want to check the language, compare the language property contained in the Locale . With this method, I believe you can get the result you are looking for without being affected by changes to "Regional Preferences."

Conclusion

Because the "Regional Preferences" feature was added quietly in Android 14, previously working code can suddenly stop working, and the change is quite difficult to detect. Most people will have no problem, but if you are comparing languages via Locale instances, please make sure to check. If this article helps as many people as possible find and fix such bugs quickly, it will have been a great success!

Check out other articles written by my route team members!
- Structured Concurrency with Kotlin coroutines
- Jetpack Compose in myroute Android App
- A Beginner's Story of Inspiration With Compose Preview

Thank you for reading my article all the way to the end. *The Android robot was reproduced or modified from work created and shared by Google and used according to terms described in the Creative Commons 3.0 Attribution License.
Overview

I am Cui from the Global Development Group at KINTO Technologies. I am currently the project manager of the Global KINTO App team, and was previously the project manager for the back-office system developed by the Global Development Group. In this article, I will talk about Gitflow, the branch management method our back-office development team adopted to manage source code. I think it can be applied to other products as well, and I hope this article serves as a reference.

Gitflow

Note: In this article, I will only talk about Gitflow as adopted by our development team. In the following explanation, the branch name is written as "master"; on GitHub, master is the old name and the default branch is now called "main," but the role is exactly the same. The overall diagram is as follows:

Role of Each Branch
- master: Manages released source code; it holds the same source version as the application running in the production environment. Each release is tagged.
- develop: Brings together the developed source code. It includes features that have not yet been released to the production environment and always holds the latest functionality. Typically, regression tests are deployed against and performed on this branch.
- feature: For developing new or modified features. It branches from develop and merges back into develop after integration testing is complete. Generally, one feature branch is created per user story, but the development team is free to decide.
- hotfix: For bug fixes after release. It branches from master; after the bug is fixed and tests pass, this branch is deployed to the production environment. After the production release is complete, the branch is merged into both master and develop, and into in-flight release and feature branches as needed.
- release: For product releases. It branches from develop once the features to be released have been reflected there, and this branch is deployed to the production environment. When the production release is complete, merge it into master and develop and delete it.
- support: Required for projects that must continue to support older versions. A support branch maintains and releases an older version: it is derived from the master-branch commit of the version that needs support and receives independent bug fixes and releases until support ends.
- bugfix: In addition to the standard branch types above, we also define a branch type called bugfix. Details follow later, but if a bug is found prior to release, a bugfix branch is branched off the release branch to handle the fix.

Development Flow

(1) Initialization

Create a develop branch from the master branch. Note: master and develop always exist as the main Gitflow branches, and once created they must not be deleted. (Set this up on GitHub.)

(2) Development of new and modified features

1. Create a feature branch from the develop branch and start developing the new or modified feature.
2. Feature branch naming convention: feature/xxxx, where "xxxx" can be decided by the development team. Examples: feature/GKLP-001, feature/refactoring, feature/sprint15. It is also recommended to create additional working branches off the main feature branch in order to make pull requests and perform source reviews before integration testing; specific patterns are described later.
3. Commit source code revisions in the working branch, and when finished, submit a PR for review by others.
4. Once the source review is complete, merge the working branch into the main feature branch and perform integration testing.
5. Once the integration test is complete, submit a PR to merge into the develop branch and merge it. Note: Always check the merge timing; depending on the release plan, there are times when a branch must not be merged into develop even if development is complete.
6. Delete the feature branch after merging into the develop branch.

Pattern No. 1: Feature branch plus working branches. In this pattern, all working branches off the feature branch are merged before integration testing is performed. This is appropriate when the development of a single feature is large and expected to span multiple sprints.

Pattern No. 2: Branch per sprint plus working branches. In this pattern, you are not limited to running integration tests only after all working branches have been merged; you can run integration tests for a single feature within a sprint, once the necessary development has been merged. This is appropriate when the feature is small and expected to be completed within one sprint.

Pattern No. 3 (not recommended): Equating the feature branch with the working branch. In this pattern, the timing of PR submission and integration testing is unclear, and the frequency of merges into develop is high, making QA and release planning very cumbersome. We do not recommend such an unplanned approach. Instead, plan your releases properly during system development and operation, and decide how to cut feature branches accordingly!

(3) Release & Deployment

Create a release branch from the develop branch. Tag the release branch (see the tag naming convention below). When the deployment to the production environment is finished, merge the release branch into the master branch. Delete the release branch after the merge is complete.

Release Plan

For development that you plan to release to the production environment, create a release plan as early as possible. The operational rules for feature branches, and the timing of merging them into develop, are determined by the release plan. The simplest release plan is to release every feature that has been merged into the develop branch, which only requires creating a release branch. However, if multiple development teams are developing different features at the same time and plan several releases, you should create the release branch first and merge the targeted features into it one by one. For example, features 1, 2, and 3 are developed simultaneously, but features 1 and 2 are released first and feature 3 a few weeks later. Once a release branch such as release 1.0 or 2.0 above has been cut from develop, the rule is that, in principle, modified source code from develop must never be merged into it again. The reason: with multiple release plans, another feature may be merged into develop after the release branch was created; merging from develop again would mistakenly release that feature even though it has not been tested, as shown in the figure below.
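Put into plain git commands, the release steps above might look like this. This is a minimal sketch: the branch and tag names are illustrative, and the merges back into master and develop follow the branch roles described earlier.

```sh
# Cut a release branch from develop once the targeted features are merged
git checkout develop && git pull
git checkout -b release/1.0.0

# Tag it following the convention described later (release.x.x.x)
git tag release.1.0.0

# ...deploy this branch to the production environment...

# After the release, fold it back into the long-lived branches and delete it
git checkout master && git merge --no-ff release/1.0.0
git checkout develop && git merge --no-ff release/1.0.0
git branch -d release/1.0.0
```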
Also, feature branches are not merged into develop immediately after development is completed. Once merged into develop, a feature will be included in the next release, so be sure to check the timing of merging feature branches into develop against your release plan.

If a bug is found prior to release

Create a bugfix branch off the release branch. Fix the bug, submit a PR, and merge it into the release branch. The fix is reflected in master and develop after the release work, when the release branch is merged into them. As shown in the figure below:

(4) Bug fix in production environment

If a bug occurs in the production environment, follow these steps. First, create a hotfix branch from the master branch. Tag the hotfix branch when the fix is done (see the tag naming convention below). When the deployment to the production environment is complete, merge the hotfix branch into the master and develop branches. Delete the hotfix branch after the merge is complete.

Maintenance branch

The product's versioning policy versions it per microservice, and each major version has a set maintenance period. Therefore, a maintenance branch is needed for each major version in the microservice's GitHub repository. For example, if the microservice "Automotive" has released three major versions so far (v1, v2, and v3), the maintenance branches would look like this: To make minor changes or fix bugs in an old major version, it is advisable to branch from the corresponding maintenance branch, but you can also make an appropriate release plan depending on the scale of development and decide on development and release branches.

Branch Commit Rule

There are two ways to get modified source code into a Git branch: committing directly, or submitting a pull request and having a reviewer approve it before merging. In principle, opt for making a pull request and then merging. However, you may commit directly to the following branches:
1. Working branches used for developing new and modified features
2. Bugfix branches for fixing bugs just before release
3. Hotfix branches for post-release bug fixes

Tag Naming Convention

Development environment
1.1 On GitHub, manually at release time (not recommended)
 → Tag the git branch. Naming convention: x.x.x-SNAPSHOT. Example: 1.0.0-SNAPSHOT
 → When registering to ECR, the image is automatically tagged using the tag and the time. Image tag name: x.x.x-SNAPSHOT_yyyyMMdd-hhmmss. Example: 1.0.0-SNAPSHOT-20210728-154024
1.2 Use JIRA tickets, automatically at release time (recommended)
 → Do not tag the git branch.
 → When registering to ECR, the image is automatically tagged using the current branch and the time. Image tag name: branch-name_yyyyMMdd-hhmmss. Example: develop-20210728-154024

Staging & Production Environment
Manually tag the release or hotfix branch. Naming convention: release.x.x.x. Example: release.1.0.0
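Combining the hotfix steps from (4) with the tag convention above, the command-level flow might look like this (a sketch; the branch name and version number are illustrative):

```sh
# Branch from master, which mirrors production
git checkout master && git pull
git checkout -b hotfix/login-error

# ...fix the bug, pass the tests, deploy to production from this branch...
git tag release.1.0.1

# Fold the fix back into both long-lived branches, then delete the hotfix
git checkout master && git merge --no-ff hotfix/login-error
git checkout develop && git merge --no-ff hotfix/login-error
git branch -d hotfix/login-error
```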
Challenges solved by this Git branch strategy

Our development team was launched a year ago. At the beginning, we ran into confusion over source code management, owing to the diverse development experience and backgrounds of our team members. Our project also had a "core" team of developers at headquarters and a "star" team of developers offshore. Although the two teams work on different features, it is inevitable that the same source files get modified at the same time. Thus, the following problems occurred:
- Source code conflicts in which other people's updates were accidentally deleted
- Features developed on top of old source code
- Phased releases not being feasible

We value teamwork in system development, and rules that everyone acknowledges and follows are essential. This is exactly what Gitflow provides. Team members responsible for different features can create separate feature branches and modify the source without impacting each other's work. Also, by keeping the latest source code in the develop branch in step with the sprint development cycle, everyone can start the next development cycle from the latest source. In addition, by creating a release branch per release plan, the developed features can be released gradually, reducing the burden on developers and the risk to the project itself! With this Git branch strategy in place, the back-office system development team I previously led was able to overcome the chaos and develop and release features stably! As I take on the role of project manager for the app development project, I aim to draw on this experience when I encounter similar challenges. Why not use it as a reference for your own product development?
I tried building an AWS serverless architecture with Nx and Terraform!

Hello. I'm Kurihara, from the CCoE team at KINTO Technologies, and I'm passionate about creating DevOps experiences that bring joy to developers. As announced at AWS Summit Tokyo 2023, our DBRE team's approach to balancing agility and governance for our vehicle subscription service KINTO is to deploy a company-wide platform that provides temporary jump servers (the "DBRE platform" from here on), triggered by requests from Slack. The DBRE platform is implemented using a combination of several AWS serverless services. In this article, we introduce how we improved the developer experience using a monorepo tool called Nx together with Terraform. Our aim is to provide insights for anyone interested in adopting a monorepo development approach, whether or not they focus on serverless architectures.

Background and Issues

The architecture of our DBRE platform looks as follows: In addition to the above, there are about 20 Lambdas developed in Golang, Step Functions to orchestrate them, DynamoDB, and EventBridge for scheduled triggers. The following issues and requests were raised during development:
- Integrate "terraform plan" and "terraform apply" workflows for secure deployment
- Incorporate appropriate static code analysis such as formatters and linters

When considering serverless development, conventional choices like SAM or the Serverless Framework come to mind. However, we decided against them because we wanted to implement IaC with Terraform and because support for Lambda functions developed in Golang was lacking. Let's look at the Terraform Lambda module. I thought that if I could produce a proper zip of the Lambda code for Terraform to reference, I could resolve the requirement of implementing IaC with Terraform.

```hcl
resource "aws_lambda_function" "test_lambda" {
  # If the file is not in the current working directory you will need to include a
  # path.module in the filename.
  filename         = "lambda_function_payload.zip"
  function_name    = "lambda_function_name"
  role             = aws_iam_role.iam_for_lambda.arn
  handler          = "index.test"
  source_code_hash = data.archive_file.lambda.output_base64sha256
  runtime          = "nodejs16.x"

  environment {
    variables = {
      foo = "bar"
    }
  }
}
```

Furthermore, consider the second request: properly incorporating static code analysis. Serverless development is a combination of smaller code bases. In other words, we figured that introducing a monorepo tool would facilitate integration with development tools and keep build scripts simple by clearly defining the boundaries between the code bases.

What is a monorepo tool?

To get straight to the point, we decided to use a TypeScript-based monorepo tool called Nx. We opted for Nx primarily because of its extensive feature coverage, as highlighted on the monorepo-tool comparison site Monorepo.tools. Additionally, its JavaScript-based ecosystem appealed to us: we thought it would scale well and accommodate future growth (assuming the barrier to entry from the front-end community is low). Examples are given in the next chapter, but as a premise, let me briefly explain what a monorepo is and what Nx does.
Defining terms

Let us take a moment to align the terms used in this article with Nx's conventions:
- Project: a repository-like unit within the monorepo (e.g., a single Lambda's code, or a common module)
- Task: a generic term for the processes required to build an application, such as test, build, and deploy

What a monorepo is

It is described as a single repository in which related projects are stored with isolated, well-defined relationships. In contrast, there is the multi-repository configuration often referred to in the web realm as polyrepo. Source: monorepo.tools. In summary, monorepo.tools cites the following advantages:
- Atomic commits on a per-system basis
- Easy sharing of common modules (when a common module is updated, it can be used immediately, with no publishing or importing step)
- It is easier to stay aware of the system as a whole, rather than of vertical slices (in terms of mindset)
- Less work is required when setting up a new repository

While the AWS CDK isn't categorized as a monorepo tool, it shares a similar philosophy about managing IaC and application code, in line with the monorepo trend of consolidating infrastructure and application code in a single repository: "We discovered that failures are often related to 'out-of-band' changes to an application that aren't fully tested, such as configuration changes. Therefore, we developed the AWS CDK around a model in which your entire application is defined in code, not only business logic but also infrastructure and configuration. …and fully rolled back if something goes wrong." - https://docs.aws.amazon.com/cdk/v2/guide/best-practices.html

What Nx can do

Roughly speaking, if you define tasks and dependencies for each project, Nx orchestrates the tasks. The following is an example of defining tasks and dependencies for a terraform project (the concrete definition appears in the IaC project shown later). Defined this way, plan-development first builds (compiles and zip-compresses) the Lambda code through the declared dependencies and then runs terraform plan. fmt and test can likewise be defined simply as terraform-project-specific tasks. By clarifying the responsibilities of each code base this way, the overall outlook of the code improves. Development tools suited to each language can be incorporated project by project, and an appropriate development flow can be built without depending on whoever does the build.

Practical examples at KTC

The following is an excerpt from the aforementioned DBRE platform, simplified and illustrated with practical examples. There are two Golang Lambda code bases, both using the same common module. Each Lambda code project is responsible for compiling its own code and creating a zip file so that it can be deployed from Terraform. The directory structure looks like this.

Project Definition

The project definitions for each of the four projects above are listed below.

①: Common modules

In Golang, a common module only needs to be referenced by its consumers, so no build is required; only static analysis and UT are defined as tasks.

projects/dbre-toolkit/lambda-code/shared-modules/package.json

```json
{
  "name": "shared-modules",
  "scripts": {
    "fmt-fix": "gofmt -w -d",
    "fmt": "gofmt -d .",
    "test": "go test -v"
  }
}
```
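For reference, here is a sketch of how a Lambda project might reference such a common module on the Go side. The module paths match the package paths that appear in the test output later in this article, but the replace directive is our assumed wiring; the real repository may resolve this differently (for example, with Go workspaces).

```
// lambda-code-01/go.mod (a sketch)
module github.com/kinto-dev/dbre-platform/dbre-toolkit/lambda-code-01

go 1.20

require github.com/kinto-dev/dbre-platform/dbre-toolkit/shared-modules v0.0.0

// Point the dependency at the sibling directory inside the monorepo, so that
// edits to the common module are picked up immediately without publishing.
replace github.com/kinto-dev/dbre-platform/dbre-toolkit/shared-modules => ../shared-modules
```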
②, ③: Lambda code

By registering the common module as a dependent project, it is defined that if the common module's code changes, this project's tasks need to be executed. The build task runs go build and zips the generated binary, which is later used by the terraform project.

projects/dbre-toolkit/lambda-code/lambda-code-01/package.json

```json
{
  "name": "lambda-code-01",
  "scripts": {
    "fmt-fix": "gofmt -w -d .",
    "fmt": "gofmt -d .",
    "test": "go test -v",
    "build": "cd ../ && GOOS=linux GOARCH=amd64 CGO_ENABLED=0 go build -o lambda-code-01/dist/main lambda-code-01/main.go && cd lambda-code-01/dist && zip lambda-code.zip main"
  },
  "nx": {
    "implicitDependencies": [
      "shared-modules"
    ]
  }
}
```

④: IaC

When plan-${env} or apply-${env} is executed, the build of the Lambda code specified in the dependencies runs first (so the necessary zip is generated whenever plan or apply runs).

projects/dbre-toolkit/iac/package.json

```json
{
  "name": "iac",
  "scripts": {
    "fmt": "terraform fmt -check -diff -recursive $INIT_CWD",
    "fmt-fix": "terraform fmt -recursive $INIT_CWD",
    "test": "terraform validate",
    "plan-development": "cd development && terraform init && terraform plan",
    "apply-development": "cd development && terraform init && terraform apply -auto-approve"
  },
  "nx": {
    "implicitDependencies": [
      "lambda-code-01",
      "lambda-code-02"
    ],
    "targets": {
      "plan-development": {
        "dependsOn": [
          "^build"
        ]
      },
      "apply-development": {
        "dependsOn": [
          "^build"
        ]
      }
    }
  }
}
```

From the terraform module, refer to the zip file generated in the previous step as follows.

```hcl
locals {
  lambda_code_01_zip_path = "${path.module}/../../../lambda-code/lambda-code-01/dist/lambda-code.zip"
}

# Redacted

resource "aws_lambda_function" "lambda-code-01" {
  function_name    = "lambda-code-01"
  architectures    = ["x86_64"]
  runtime          = "go1.x"
  package_type     = "Zip"
  filename         = local.lambda_code_01_zip_path
  handler          = "main"
  source_code_hash = filebase64sha256(local.lambda_code_01_zip_path)
}
```

Task Execution

Now that each project has been divided and its tasks defined, let's look at task execution. In Nx, the run-many subcommand executes a specific task for specific projects or for all projects. Based on the dependencies, tasks run in parallel where possible, which also speeds things up.

```
nx run-many --target=<defined task name> --projects=<comma-separated project names>
nx run-many --target=<defined task name> --all
```

Example of executing plan-development for the iac project. Tasks with dependencies execute according to the defined dependencies, which is exactly the point I wanted to make: the dependent projects' tasks run ahead of time, ensuring that the Lambda code is properly zipped when terraform is executed.

```
$ nx run-many --target=plan-development --projects=iac --verbose

> NX Running target plan-development for 1 project(s) and 2 task(s) they depend on:
- iac
——————————————————————————————————————————————
> nx run lambda-code-01:build
updating: main (deflated 56%)
> nx run lambda-code-02:build
updating: main (deflated 57%)
> nx run iac:plan-development
Initializing modules...
Initializing the backend...
Initializing provider plugins...
- Reusing previous version of hashicorp/aws from the dependency lock file
- Using previously-installed hashicorp/aws v4.39.0

Terraform has been successfully initialized!

--redacted

Plan: 0 to add, 2 to change, 0 to destroy.
```

Example of executing the test task for all projects. Tasks with no dependencies, such as UT, run in parallel.
This allows for CI execution, and lets development rules such as "always run UT before pushing to GitHub" be satisfied with a single command.

```
$ nx run-many --target=test --all --verbose

> NX Running target test for 4 project(s):
- lambda-code-01
- lambda-code-02
- shared-modules
- iac
——————————————————————————————————————————————
> nx run shared-modules:test
? github.com/kinto-dev/dbre-platform/dbre-toolkit/shared-modules [no test files]
> nx run lambda-code-01:test
=== RUN Test01
--- PASS: Test01 (0.00s)
PASS
ok github.com/kinto-dev/dbre-platform/dbre-toolkit/lambda-code-01 0.255s
> nx run iac:test
Success! The configuration is valid.
> nx run lambda-code-02:test
=== RUN Test01
--- PASS: Test01 (0.00s)
PASS
ok github.com/kinto-dev/dbre-platform/dbre-toolkit/lambda-code-02 0.443s
——————————————————————————————————————————————
> NX Successfully ran target test for 4 projects
```

Powerful features of Nx and monorepo tools

We hope you can see how tasks can be orchestrated by properly defining projects. However, this alone is no different from a regular task runner, so here are some of the major advantages of Nx and monorepo tooling.

Execute tasks only for changed projects

The fastest task execution is not executing the task in the first place. A mechanism called the affected command, which runs tasks only for changed projects, is available to make CI complete quickly. The command syntax is as follows; by passing two Git pointers, it executes tasks only in the projects that changed between them.

```
nx affected --target=<task name> --base=<base of two-dot diff> --head=<head of two-dot diff>
```

```
# State with changes only in lambda-code-01
$ git diff main..feature/111 --name-only
projects/dbre-toolkit/lambda-code/lambda-code-01/main.go

$ nx affected --target=build --base=main --head=feature/111 --verbose

> NX Running target build for 1 project(s):
- lambda-code-01
——————————————————————————————————————————————
> nx run lambda-code-01:build
updating: main (deflated 57%)
——————————————————————————————————————————————
> NX Successfully ran target build for 1 projects
```

If a project it depends on has changed, tasks are executed according to the dependencies.
```
# State with changes only in shared-module
$ git diff main..feature/222 --name-only
projects/dbre-toolkit/lambda-code/shared-modules/utility.go

# Tasks in projects that depend on shared-module are executed
$ nx affected --target=build --base=main --head=feature/222 --verbose

> NX Running target build for 2 project(s):
- lambda-code-01
- lambda-code-02
——————————————————————————————————————————————
> nx run lambda-code-01:build
updating: main (deflated 56%)
> nx run lambda-code-02:build
updating: main (deflated 57%)
——————————————————————————————————————————————
> NX
```

Simplifying the CI/CD pipeline

As long as task names do not change, the CI/CD pipeline does not need to change as projects are added, which lowers maintenance costs. In addition, the affected command described above speeds up CI/CD (since it only runs tasks for changed projects). Below is an example of CI with GitHub Actions.

```yaml
name: Continuous Integration
on:
  pull_request:
    branches:
      - main
      - develop
    types: [opened, reopened, synchronize]
jobs:
  ci:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
        with:
          fetch-depth: 0
      # --immutable installs the dependency versions pinned in yarn.lock
      - name: install npm dependencies
        run: yarn install --immutable
        shell: bash
      - uses: actions/setup-go@v3
        with:
          go-version: '^1.13.1'
      - uses: hashicorp/setup-terraform@v2
        with:
          terraform_version: 1.3.5
      - name: configure AWS credentials
        uses: aws-actions/configure-aws-credentials@v1-node16
        with:
          aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
          aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
          aws-region: 'ap-northeast-1'
      # The task-execution part is completed with this small amount of description
      - name: format check
        run: nx affected --verbose --target fmt --base=remotes/origin/${{ github.base_ref }} --head=remotes/origin/${{ github.head_ref }}
      - name: test
        run: nx affected --verbose --target test --base=remotes/origin/${{ github.base_ref }} --head=remotes/origin/${{ github.head_ref }}
      - name: build
        run: nx affected --verbose --target build --base=remotes/origin/${{ github.base_ref }} --head=remotes/origin/${{ github.head_ref }}
      - name: terraform plan to development
        run: nx affected --verbose --target plan-development --base=remotes/origin/${{ github.base_ref }} --head=remotes/origin/${{ github.head_ref }}
```

Combine with Git hooks for even greater productivity

I'd like at least static analysis and unit tests to run locally before pushing to GitHub, and complaints such as "the Git history is dirty" can be resolved easily too. By combining the --files and --uncommitted options of the affected command with a Git hook, only the projects that the changed files belong to are targeted, minimizing developer stress (and time spent waiting). For example, the following affected commands can be included in the pre-commit hook to keep the commit history clean and reduce review noise.
```sh
nx affected --target lint --files $(git diff --cached --name-only)
nx affected --target unit-test --files $(git diff --cached --name-only)
nx affected --target fmt-fix --files $(git diff --cached --name-only)
```

Other Benefits

Task execution results are cached if the project code has not changed

The results of task execution are cached, both the generated files and the standard output/errors. (For more information, click here.)

```
$ tree .nx-cache/
├── ce36b7825abacc0613a8b2c606c65db6def0e5ca9c158d5c2389d0098bf646a1
│   ├── code
│   ├── outputs
│   │   └── projects
│   │       └── dbre-toolkit
│   │           └── lambda-code
│   │               └── lambda-code-01
│   │                   └── dist
│   │                       ├── lambda-code.zip
│   │                       └── main
│   └── terminalOutput
├── ce36b7825abacc0613a8b2c606c65db6def0e5ca9c158d5c2389d0098bf646a1.commit
├── nxdeps.json
├── run.json
└── terminalOutputs
    ├── 1c9b46c773287538b1590619bfa5c9abf0ff558060917a184ea7291c6f1b988c
    ├── 6f2fbb5f2dd138ec5e7e261995be0d7cddd78e7a81da2df9a9fe97ee3c8411c5
    ├── 88c7015641fa6e52e0d220f0fdf83a31ece942b698c68c4455fa5dac0a6fd168
    ├── 9dc8ebe6cdd70d8b5d1b583fbc6b659131cda53ae2025f85037a3ca0476d35b8
    ├── c4267c4148dc583682e4907a7692c2beb310ebd2bf9f722293090992f7e0e793
    ├── ce36b7825abacc0613a8b2c606c65db6def0e5ca9c158d5c2389d0098bf646a1
    ├── db7e612621795ef228c40df56401ddca2eda1db3d53348e25fe9d3fe90e3e9a1
    ├── dc112e352c958115cb37eb86a4b8b9400b64606b05278fe7e823bc20e82b4610
    └── eb94fd3a7329ab28692a2ae54a868dccae1b4730e4c15858e9deb0e2232b02f3
```

If this caching mechanism is also integrated into the CI/CD pipeline, it optimizes processing during code reviews: when only part of the code needs modification, the cache lets most CI steps for the updated push complete quickly, improving development efficiency.

```yaml
- name: set nx cache dir to environment variables
  id: set-nx-version
  run: |
    echo "NX_CACHE_DIRECTORY=$(pwd)/.nx-cache" >> $GITHUB_ENV
  shell: bash
# Register the nx cache with the GitHub cache
- name: nx cache action
  uses: actions/cache@v3
  id: nx-cache
  with:
    path: ${{ env.NX_CACHE_DIRECTORY }}
    key: nx-cache-${{ runner.os }}-${{ github.sha }}
    restore-keys: |
      nx-cache-${{ runner.os }}-
```

The graph command allows visualization of project dependencies

Even though the boundaries of the code bases have been clarified, there are still times when you want a comprehensive view of the dependencies. A graph subcommand is maintained for visualizing dependencies between projects. Being able to handle such needs is another benefit of Nx.
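Invoking it is a one-liner; a quick sketch follows. The --file option for writing a static HTML file exists in recent Nx versions, but please check against the version you are using.

```sh
# Open the interactive project graph in a browser
nx graph

# Or emit it as a standalone HTML file, e.g. to attach to a design doc
nx graph --file=project-graph.html
```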
Current status of the DBRE platform

The DBRE platform currently has 28 projects in its monorepo. In the example above the number of projects was small, so the benefits may have been hard to feel, but at this scale the benefits of the affected command shine through like stars.

```
$ yarn workspaces list --json
{"location":".","name":"dbre-platform"}
{"location":"dbre-utils","name":"dbre-utils"}
{"location":"projects/DBREInit/iac","name":"dbre-init-iac"}
{"location":"projects/DBREInit/lambda-code/common","name":"dbre-init-lambda-code-common"}
{"location":"projects/DBREInit/lambda-code/common-v2","name":"dbre-init-lambda-code-common-v2"}
{"location":"projects/DBREInit/lambda-code/push-output","name":"dbre-init-lambda-code-push-output"}
{"location":"projects/DBREInit/lambda-code/s3-put","name":"dbre-init-lambda-code-s3-put"}
{"location":"projects/DBREInit/lambda-code/sf-check","name":"dbre-init-lambda-code-sf-check"}
{"location":"projects/DBREInit/lambda-code/sf-collect","name":"dbre-init-lambda-code-sf-collect"}
{"location":"projects/DBREInit/lambda-code/sf-notify","name":"dbre-init-lambda-code-sf-notify"}
{"location":"projects/DBREInit/lambda-code/sf-setup","name":"dbre-init-lambda-code-sf-setup"}
{"location":"projects/DBREInit/lambda-code/sf-terminate","name":"dbre-init-lambda-code-sf-terminate"}
{"location":"projects/PowerPole/iac","name":"powerpole-iac"}
{"location":"projects/PowerPole/lambda-code/pp","name":"powerpole-lambda-code-pp"}
{"location":"projects/PowerPole/lambda-code/pp-approve","name":"powerpole-lambda-code-pp-approve"}
{"location":"projects/PowerPole/lambda-code/pp-request","name":"powerpole-lambda-code-pp-request"}
{"location":"projects/PowerPole/lambda-code/sf-deploy","name":"powerpole-lambda-code-sf-deploy"}
{"location":"projects/PowerPole/lambda-code/sf-notify","name":"powerpole-lambda-code-sf-notify"}
{"location":"projects/PowerPole/lambda-code/sf-setup","name":"powerpole-lambda-code-sf-setup"}
{"location":"projects/PowerPole/lambda-code/sf-terminate","name":"powerpole-lambda-code-sf-terminate"}
{"location":"projects/PowerPoleChecker/iac","name":"powerpolechecker-iac"}
{"location":"projects/PowerPoleChecker/lambda-code/left-instances","name":"powerpolechecker-lambda-code-left-instances"}
{"location":"projects/PowerPoleChecker/lambda-code/sli-notifier","name":"powerpolechecker-lambda-code-sli-notifier"}
{"location":"projects/dbre-toolkit/docker-image/shenron-wrapper","name":"dbre-toolkit-docker-image-shenron-wrapper"}
{"location":"projects/dbre-toolkit/iac","name":"dbre-toolkit-iac"}
{"location":"projects/dbre-toolkit/lambda-code/dt-list-dbcluster","name":"dbre-toolkit-lambda-code-dt-list-dbcluster"}
{"location":"projects/dbre-toolkit/lambda-code/dt-make-markdown","name":"dbre-toolkit-lambda-code-dt-make-markdown"}
{"location":"projects/dbre-toolkit/lambda-code/utility","name":"dbre-toolkit-lambda-code-utility"}
```

The Terraform IaC is also divided into four projects, one per component. Being able to split projects this easily keeps each code base slim, even within a single repository, and the affected command lets CI/CD complete faster, raising productivity without degrading the development experience.

```
$ yarn list-projects | grep iac
{"location":"projects/DBREInit/iac","name":"dbre-init-iac"}
{"location":"projects/PowerPole/iac","name":"powerpole-iac"}
{"location":"projects/PowerPoleChecker/iac","name":"powerpolechecker-iac"}
{"location":"projects/dbre-toolkit/iac","name":"dbre-toolkit-iac"}
```

Issues

Let me also present the challenges we faced in completing this development architecture and how we solved them. As mentioned at the beginning, zipping the Lambda code was an important point, but unless the execution environment and the zip metadata (update dates, etc.) were completely identical, Terraform would detect differences even when the code was unchanged.
The solution was to build and zip the code inside a container and call that from the task definition.

Dockerfile

```dockerfile
FROM golang:1.20-alpine

RUN apk update && \
    apk fetch zip && \
    apk --no-cache add --allow-untrusted zip-3.0-r*.apk bash

COPY ./docker-files/go-single-module-build.sh /opt/app/go-single-module-build.sh
```

./docker-files/go-single-module-build.sh

```bash
#!/bin/bash
set -eu -o pipefail

while getopts "d:m:b:h" OPT; do
  case $OPT in
    d) SOURCE_ROOT_RELATIVE_PATH="$OPTARG" ;;
    m) MAIN_GO="$OPTARG" ;;
    b) BINARY_NAME="$OPTARG" ;;
    h) help ;;
    *) exit ;;
  esac
done
shift $((OPTIND - 1))

cd "/opt/mounted/$SOURCE_ROOT_RELATIVE_PATH" || exit 1
rm -f ./dist/*
CGO_ENABLED=0 GOOS=linux GOARCH=amd64 go build -o "./dist/$BINARY_NAME" "$MAIN_GO"
cd ./dist || exit 1

# for sha256 diff
chown "$HOST_USER_ID":"$HOST_GROUP_ID" "$BINARY_NAME"
touch --no-create -t 01010000 "$BINARY_NAME" ./*.tmpl
zip "$BINARY_NAME.zip" "$BINARY_NAME" ./*.tmpl
chown -R "$HOST_USER_ID":"$HOST_GROUP_ID" ../dist
```

There are other issues as well, such as the current lack of local execution support. In the future, I would like to try bringing not only Terraform but also SAM and CDK into the monorepo.

Summary

In this article, we introduced the powerful features of Nx through how we manage AWS serverless with a monorepo tool. If this sounds like something you would like to do, why not consider working with us in the Platform Group? Thank you for reading.
A Memorable First Request

This is HOKA from the Manabi no Michi-no-Eki ("Learning Roadside Station") management office. In February 2024, at the monthly meeting attended by every member of KINTO Technologies, we announced "Manabi no Michi-no-Eki is launching!" Soon afterwards, Nakaguchi-san from the iOS team in the Mobile App Development Group reached out, saying, "I'd like to talk about our study sessions."

A Consultation About the Mobile App Development Group's Study Sessions

It was our very first inquiry. The four iOS team leaders and three of us from the office got together for a meeting right away. We learned that, with the goal of raising the overall level of the iOS team, they had been holding a weekly session since June 2023: in the first week of each month everyone decides what to do, and in weeks two through four they carry it out. Facilitation rotates among members. Topics have included casual chat, lightning talks, and book-reading circles; they have even given presentations on the HIG (Human Interface Guidelines). HOKA's impression was, "This is already well run. Is there really anything to worry about?" This, by the way, is very typical of KINTO Technologies employees. Through this consultation, the three of us from the office were invited: "Please come and observe a study session!"

Drop-In! The Study Session Next Door: Self-Introductions and a Chat Session

And so we carried out the "drop in on the study session next door" visit we had wanted to do. On March 12, 2024, the iOS team gathered online and in a meeting room, and the study session began. Since new members were joining that day, the theme was a "chat session" doubling as self-introductions. First came the introductions: 18 people at one minute each, about 20 minutes in total. Each person shared their name, the product they work on, and recent news. Even at one minute each, with a live running commentary on Slack, it was an efficient round of introductions that let first-time participants like us get a sense of everyone's personality. And we from the Michi-no-Eki office slipped in our own introductions as well.

The second half was free chat. Someone brought up a conversation from the previous day with Awata-san, who had visited the Muromachi office: "We deploy from Slack, but it's reaching its limits. It would be great to have a mobile app that integrates without requiring sign-in." One member then proposed, "Why don't we build it outside of regular work? The Mobile App Development Group has producers and backend engineers too. I've set up a Slack channel, so if you're interested, let's discuss it there." Impressive! Then Assistant Manager Hinomori-san chimed in: "Wouldn't it be good to build a Manabi no Michi-no-Eki app? Building apps for internal use sounds great. Add NFTs and make KTC tokens or something." Yajima-san: "Like earning points for attending study sessions?" Hinomori-san: "And people who accumulate points get something at the end of the year? It would be good to take on projects like that, ones that sound fun but aren't ready to be released externally." Nakano-san: "Maybe it's good for people inside the company to build things for the company!" An unexpected tailwind for Manabi no Michi-no-Eki! We were delighted. "There seems to be learning here beyond writing source code," someone remarked, and amid the chat, comments flew back and forth offering hints for growing as an engineer. This study session really is something, isn't it? The conversation developed further, and the group got excited discussing "try! Swift Tokyo," the event scheduled for the end of March, as the topic for April's session. With homework to bring back by the following week, the iOS engineers returned to their own roads.
Introduction

Hello, I am Takaba, a Product Manager in the Global Development Group at KINTO Technologies. In this article, I will share my tips on communicating effectively with the various stakeholders a Product Manager talks to. Having worked on products for many years, I have seen how communication influences the atmosphere and success of a project. Here are some things I have experienced and still practice every day.

The Pyramid Style

As a Product Manager, I talk to people often, and to make sure others can follow me, I communicate using a particular method: the Pyramid Principle for logical speaking. There are three reasons I use it. First, I felt I needed some kind of method because I am not very good at speaking in public. For example, when I give a presentation or take part in a discussion, the method keeps me from falling into a loop where I worry about whether the listeners understand me and then become even worse at conveying what I want to say. Second, the job of a Product Manager involves talking to many people, who inevitably offer many different opinions that are hard to organize. A Product Manager discusses the product with many stakeholders, but the stakeholders have differing views, and it is sometimes hard to choose among the many options presented. I find that using this method at times like that lets me work through it relatively smoothly. Third, you have to speak logically to communicate information accurately. Speaking logically lets the listener understand better: when I explain something, a logical structure usually makes it easier to follow. By communicating what you want to say with a logical approach, you convey it in an approachable way that is easier to understand, and the way you communicate matters greatly, because an approachable delivery can affect the atmosphere and success of a project. What I have just written was itself structured with the pyramid principle in mind. The method is described in the book "Speak in One Minute" [^1] by Yoichi Ito, who taught me in person at a seminar held at an IT company where I used to work, and I used part of it above to explain the pyramid-style logical way of speaking, which I needed when I first became a Product Manager.

I will now explain the pyramid-style speaking method. As the top of the pyramid shows, you start with the conclusion: say first the thing you most want to convey. The next step is the reasoning: state the reasons supporting the conclusion. Relying on a single reason is weak; aim for at least three. The third step is giving examples. The more specific your examples, the more likely you are to convince the listener; this part fleshes out your conclusion and aids understanding, so be specific and easy to picture. Let me illustrate with an example. For this example, the conclusion is, "There should be a regular product meeting once a week." It looks like this in the pyramid style.
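To make the shape concrete, the example could be laid out roughly as follows. The conclusion is the one stated above; the reasons and concrete examples are invented here purely for illustration.

- Conclusion: "There should be a regular product meeting once a week."
- Reasons (aim for at least three): information stays siloed among stakeholders without a fixed touchpoint; decisions stall while ad-hoc meetings are being scheduled; priorities drift when there is no shared checkpoint.
- Concrete examples: a spec change that reached QA a week late because it was only shared verbally; two teams discovering they had built overlapping features; a release date that slipped because a blocking decision waited ten days for a meeting.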
Pyramid-style logical speaking is basic, and there are many occasions in business where people should use it, but how many actually use it in their day-to-day work? I don't think that many do. I think a lot of people assume that because they understand a concept, the listener can also understand it, so they tend to cut sentences short and skip over many key words. Like me, many people cut conversations short because discussing things in depth can feel bothersome. Training is necessary, because speaking logically takes skill and practice. If you use it every day, you will do it more accurately and communicate in a way that is easy for listeners to understand.

Applying the Pyramid Principle Also Involves Hypothetical Thinking

Many logical-thinking textbooks may say the opposite: starting with a conclusion and then reasoning backwards, as in the pyramid style, can lead to a form of "self-centered logic." However, the author says that in today's fast-paced world, prioritizing speed, even if it means adopting a somewhat self-centered approach, is acceptable if it helps you formulate thoughts quickly. Even if your explanation isn't fully complete, engaging stakeholders with your reasoning brings you closer to a conclusion. For example, in a project, discussing with the whole team at an early stage, polishing ideas, and ensuring everyone is on the same page speeds up the project's success. When sharing and discussing, every team member brings different ideas to the table, and by bringing those many opinions together, we can reach a conclusion that is better and more objective for everyone. I feel that this kind of hypothetical thinking (reasoning from the conclusion, as above) is efficient and speed-oriented, and that being able to think objectively this way is a great skill to have.

Using the Pyramid to Improve Listening Skills

I have discussed the pyramid style from the speaker's perspective, but the method can also be used to train active-listening skills. The other day, a colleague at work suggested that I improve my ability to understand what is being said; that was when it struck me to use the pyramid method for understanding as well. I create a box in my head corresponding to the pyramid's "conclusion + reasoning," and as I listen, I sort the information into this box. Listening while segmenting this way makes it easier to see what the core of the story is and what is missing. Source: Yoichi Ito, "Speak in One Minute", SB Creative, 2018 [^1] This is not something that can be mastered immediately; we can improve our listening and understanding by practicing the method in daily conversations as well. So I incorporate it into both my speaking and my listening training every day.

Conclusion

Today, I talked about a communication method I use in my day-to-day work. Are there methods you consciously use every day? If the pyramid method of speaking interests you, I recommend giving it a try. I believe communication skills are very important for becoming a better Product Manager, and I will keep working at them together with everyone.

References
[^1]: Yoichi Ito (Author), "Speak in One Minute", SB Creative, 2018
Hello (or good evening), this is part 6 of our irregular Svelte series. To read the previous articles, click the links below:
- Insights from using SvelteKit + Svelte for a year
- Comparison of Svelte and other JS frameworks - Irregular Svelte series 01
- Svelte unit test - Irregular Svelte series 02
- Using Storybook with Svelte - Irregular Svelte series 03
- Exploring Svelte in Astro - Irregular Svelte series 04
- Svelte Tips - Irregular Svelte series 05

In this article, I will be writing about SvelteKit SSR deployments. You can get the modules here.

@sveltejs/adapter-node

Deploying with SSR requires an adapter. This time, I will use the Node adapter. https://www.npmjs.com/package/@sveltejs/adapter-node This adapter is also listed on the official GitHub. https://github.com/sveltejs/kit

Express

A web framework for Node.js. There are others, such as Fastify, and you are free to use any of them. https://www.npmjs.com/package/express

Environment Settings

First, configure Svelte. SvelteKit is SSR by default, so nothing special needs to be set there. On the other hand, you need an adapter to build for deployment. As described on the official site, install it from the Svelte project with yarn add -D @sveltejs/adapter-node and add the following code to svelte.config.js .

```js
import adapter from '@sveltejs/adapter-node';

const config = {
  kit: {
    adapter: adapter()
  }
};

export default config;
```

After building your project with yarn build , the resulting files are placed in the default output location, /build , and the files index.js and handler.js are created. If you want to run the server directly from the built files, you can execute node build , which runs build/index.js , to start the server and check that it works. (In node xxxx , xxxx is the output location of the built files, and the default output location is build .)

Next, put the Express configuration file in the root directory. (Install express beforehand.)

```js
import { handler } from './build/handler.js';
import express from 'express';

const app = express();

// For example, a health-check path for AWS, unrelated to the SvelteKit app
app.get('/health', (_, res) => {
  var param = { value: 'success' };
  res.header('Content-Type', 'application/json; charset=utf-8');
  res.send(param);
});

// The Svelte app produced by the build is handled here
app.use(handler);

app.listen(3000, () => {
  console.log('listening on port 3000');
});
```

After completing the above settings, you can start the Express server with node server.js and check the SvelteKit app at http://localhost:3000 .

Deploy the App to AWS

From here on, we deploy to AWS. On AWS, deployments can be configured in various ways depending on the requirements. In this article, I will show how to access the app from the Internet using only EC2. For security and performance reasons, please consider combinations such as CloudFront, ALB, and VPC in practice. AWS services incur charges, so it is advisable to monitor costs and stop unused services.

EC2

A cloud server service that will host the SvelteKit app. https://aws.amazon.com/jp/ec2/

Create an EC2 Instance

First, set up EC2. To create an EC2 instance, go to the EC2 dashboard and click "Launch Instance" in the upper right corner. You will then be taken to the screen shown above. Configure the following items and click "Launch Instance."
- Name: Choose a name that identifies your instance easily.
- OS images: Adjust to your preferences; for this article I will use Amazon Linux, with the subsequent commands based on it.
- Instance type: t2.micro (other types incur charges)
- Key pair: In this article I will access EC2 with an SSH client, so set one up.
- Network settings: I enable HTTP access to allow connection via an SSH client and basic web accessibility checks.

Connect to the EC2 Instance

After creating the EC2 instance, you are returned to the list screen, where you should see the newly created instance. Next, connect to the instance and finish the remaining setup. Choose the instance on the list screen and click the "Connect" button. You can then choose from four connection methods; this time, I will use SSH. I assume you already created and downloaded a key pair on the instance launch screen earlier. Use that key as instructed on the screen to connect. Once connected, install the necessary software.

Setting Up Node.js

I will install Node.js first. You can install and conveniently switch between Node.js versions using Node Version Manager (nvm). https://github.com/nvm-sh/nvm

```sh
curl -o- https://raw.githubusercontent.com/nvm-sh/nvm/v0.39.3/install.sh | bash
```

After installation, the terminal prints a message: you need to put the nvm command on your path so it can be executed. Copy the following code, paste it into the command line, and run it.

```sh
export NVM_DIR="$HOME/.nvm"
[ -s "$NVM_DIR/nvm.sh" ] && \. "$NVM_DIR/nvm.sh"
[ -s "$NVM_DIR/bash_completion" ] && \. "$NVM_DIR/bash_completion"

# Check that the nvm command works
nvm --version
-> 0.34.0
```

Then install Node.js with nvm.

```sh
# Install Node.js 18
nvm install 18

# Check the installed node and npm versions
node -v
-> 18.16.1
npm -v
-> 9.5.1

# Install yarn (skip this if you want to use npm)
npm install -g yarn
yarn -v
-> 1.22.19
```

The Node.js setup is now complete.

SvelteKit App Placement

Next, place the app in the instance. You could also copy it from your local machine to EC2, but this time I will clone it from the repository on GitHub. First, install the GitHub CLI so the repository can be cloned. Installation instructions for Linux can be found in the official documentation.

```sh
# Commands listed in the official documentation (be sure to check it, as this may change)
type -p yum-config-manager >/dev/null || sudo yum install yum-utils
sudo yum-config-manager --add-repo https://cli.github.com/packages/rpm/gh-cli.repo
sudo yum install gh

# Version check
gh --version
-> 2.31.0
```

Next, log in with your account and clone the repository.

```sh
# Log in to GitHub
gh auth login

# Put the URL of the repository you want to clone
gh repo clone https://github.com/xxxxxx/yyyyyyy
```

Now the app has been successfully cloned to the instance.

Setting Up Nginx

The next step is to install the Nginx server and modify its config file.

```sh
# Install
sudo yum install nginx

# Go to the nginx folder
cd /etc/nginx

# Open the nginx config file with vim
sudo vim nginx.conf
```

In the config file there is a section called server . Set the proxy path as follows; this syntax tells Nginx to forward requests for / to the SvelteKit server launched on the EC2 instance.

```nginx
server {
    location / {
        proxy_pass http://localhost:3000;
    }
}
```
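One step the walkthrough leaves implicit: the configuration only takes effect once Nginx is actually running (or reloaded). On Amazon Linux this is typically done via systemd; a sketch follows, though service management details can vary by AMI.

```sh
# Validate the edited configuration first
sudo nginx -t

# Start Nginx now, and have it come back after reboots
sudo systemctl start nginx
sudo systemctl enable nginx

# After later config changes, reload without dropping connections
sudo systemctl reload nginx
```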
Launch the Node Server and Access It from the Web

Finally, build and start the node server, just as you did for the local check.

```sh
yarn install
yarn build
node server.js
```

Then, try accessing the app via the DNS name of the EC2 instance you created. (This information can be found on the EC2 list page.) You should now see something like this! However, if you end the connection to the EC2 instance, the Node server will also stop. So, we use a library called pm2 to keep the Node server running. https://pm2.io/docs/runtime/guide/installation/

```sh
yarn global add pm2
pm2 -v
-> 5.3.0

pm2 start server.js

# Check the status of node servers currently managed by pm2 and the id of the one you want to stop
pm2 status

# Stop a node server currently running under pm2
pm2 stop [id]
```

Now, even if you disconnect from EC2, you can still browse the app from the web! That is all on how to deploy a SvelteKit SSR app to AWS.
Introduction

Hello, I'm nam, and I joined the company in November! In this article, I asked everyone who joined in February and March 2024 about their impressions right after joining, and summarized the answers. I hope this content will be useful for those interested in KINTO Technologies, and serve as a reflection for the members who took part!

J.O ![alt text](/assets/blog/authors/nam/newcomers/icon-jo.jpg =250x)

Self-introduction I'm J.O, and I joined in March. I belong to the New Vehicle Subscription Development Group in the KINTO ONE Development Division as a producer. At my previous job at a business company, I planned and operated the company's own consumer-facing web and app services, and compiled business-side development requirements.

How is your team structured? The New Vehicle Subscription Group has various teams covering backend, frontend, content development, tool development, and more; including partner companies, it has over 40 people, making it one of the largest departments in the company.

What was your first impression of KTC when you joined? Were there any surprises? I felt that the expectations placed on KTC, in terms of its relationship with and position among the group companies, and the role it is expected to play in providing a new platform for the mobility industry, were even higher than I had imagined before joining.

What is the atmosphere like on site? You might expect a quiet atmosphere since it's an engineer-centric company, but it's actually quite friendly and lively. The Slack chats and emojis are cheerful, too. Since the company is related to automobiles, many people decorate their desks with car models.

How did you feel about writing a blog post? At my previous job I had opportunities to create site content, but this is my first time sharing something about myself, so I'm nervous.

[Question from S.A] "Tell us something that surprised or impressed you after joining KTC." The sheer number of events, such as study sessions. Some kind of event takes place at least once every two weeks, and I was surprised by everyone's attitude toward both receiving and sharing new information.

nam ![alt text](/assets/blog/authors/nam/newcomers/icon-nam.JPG =250x)

Self-introduction I'm nam. I joined KTC in February. At my previous job, I was a frontend engineer at a production company.

How is your team structured? It's a small team, and my impression is that everyone's responsibilities are clearly divided.

What was your first impression of KTC when you joined? Were there any surprises? The orientation was very thorough. I felt a strong message of "let's all move forward in the same direction."

What is the atmosphere like on site? Members working on the same project sit near one another, so my impression is that people work freely, consulting each other as they go. It was my first time working in a large office, and I had imagined "a vast space where only the sound of keyboards echoes," but that wasn't the case at all, which was a relief.

How did you feel about writing a blog post? I had been reading the tech blog since before I joined, so I'm nervous to finally be on the writing side.

[Question from J.O] "From a frontend engineer's perspective, which websites make you think, 'This build is amazing'?" I used to do a bit of design, so sites where design and technology are in harmony strike me as truly amazing. I believe a site that is "amazingly built" is one whose "way of being built is amazing." Some sites make me unable to even imagine how much discussion took place from the planning stage, how the engineers and designers communicated, and how they came to understand each other's domains. When I see a site like that, excellent in both design and technology and well balanced, I think, "Amazing. Powerful. The best!"

KunoTakaC ![alt text](/assets/blog/authors/nam/newcomers/icon-kuno-takac.jpg =250x)

Self-introduction I'm Kuno from the KTC Administration Department. I'm in charge of labor-management systems in general (SmartHR, Recoru, Rakuro, Kaonavi, etc.). At my previous job I was an SE attached to a factory, and before that I worked at a jack-of-all-trades IT company (mainly infrastructure for small and medium businesses). In 2023 I was certified with a grade 4 physical disability (lower-limb paralysis), but there is nothing in particular you need to be careful about. I usually carry a cane, which makes me easy to spot, but when I don't have it you won't be able to tell, so please remember my face too!

How is your team structured? The Administration Department has 11 people, of whom two are in KTC Administration. And in Nagoya... just one: me! We get along well, so don't worry.

What was your first impression of KTC when you joined? Were there any surprises? Since it's an IT company, I assumed that even administrative matters would be discussed through some kind of system, so I was a little surprised that people actually meet face to face in meeting rooms.

What is the atmosphere like on site? It's generally quiet, but the atmosphere makes communication easy, and we chat now and then. The Administration Department has free seating, so it's convenient to be able to sit near whoever you want to talk to.

How did you feel about writing a blog post? I felt it could help spread the word about the #腰ケア (lower-back care) channel on Slack. I do have work beyond the labor systems, though, so it's a little tough.

[Question from nam] "Three months in: is anything different from your previous job, or have you noticed anything unique to KTC?" In a word, it's quiet. At my previous job there was the jet-engine roar of the server room air conditioning, machine-tool vibrations you could mistake for an earthquake, the drum-like pounding of dot-impact and electronic printers, the warning beeps of the three-color signal towers, and, as an accent, SystemWalker alerts and ringing phones: every day was a live concert. Also, my previous job was on-premises only, so this was my first time touching SaaS. Depending on the task, I sometimes think "on-prem is better" and sometimes "SaaS is great!"; I've come to realize that each has its pros and cons.

M ![alt text](/assets/blog/authors/nam/newcomers/icon-m.jpg =250x)

Self-introduction I jumped into a new environment because I wanted to take on in-house product development, which was hard to experience at my previous job.

How is your team structured? Our team develops products that make the vehicle-proposal work done at dealerships more efficient and more sophisticated. It includes a tech lead, frontend engineers, and backend engineers.

What was your first impression of KTC when you joined? Were there any surprises? Before joining, I had the impression of a "grown-up startup," so I expected to be asked to be autonomous and self-driven from day one. I was a little surprised at how thorough and unhurried the onboarding was, from hands-on sessions to a dialogue session with the president. Thanks to that, I quickly absorbed domain knowledge that was new to me and got to know the leadership.

What is the atmosphere like on site? In my development team, several product developments run in parallel, so we communicate a lot to stay aware of one another's work: at the daily morning meeting each member shares which development and which tasks they are focusing on, and when we're in the office we casually strike up conversations.

How did you feel about writing a blog post? I've never had the opportunity to share information through a blog before, so it feels fresh.

[Question from KunoTakaC] "What is your favorite organizing gadget? Something highly practical, please!" For anyone struggling with the phone and PC charging cables that scatter across desks and floors, I recommend the cheero CLIP universal clip! It has a magnet, so attaching and detaching is easy; whenever you find stray cables, just bundle them up. It also bends like wire and holds its shape, so you can even use it to prop up your phone for watching videos!

R.S ![alt text](/assets/blog/authors/nam/newcomers/icon-rs.jpg =250x)

Self-introduction I'm R.S from the New Vehicle Subscription Development Group, KINTO ONE Development Division. I'm in charge of the KINTO ONE frontend.

How is your team structured? It's a six-person team.

What was your first impression of KTC when you joined? Were there any surprises?
The degree of freedom in how we work is high; as someone raising children in a dual-income household, the full-flextime system helps me a great deal.

What is the atmosphere like on site? In our weekly planning we clarify what each person should do, and then everyone quietly gets the work done.

How did you feel about writing a blog post? I didn't expect to be writing this soon, but having written once, I've become much more conscious of our company blog.

[Question from M] "When you take on something new, how do you catch up? Please share your learning tips!" When something catches my interest, I take a step forward and try it. I'm the "broad and shallow" type, so my approach is basically "just give it a try," I guess? lol. Sometimes something completely unrelated that I experienced long ago connects, "the dots become a line," and I love those moments.

Hanawa ![alt text](/assets/blog/authors/nam/newcomers/icon-hanawa.jpg =250x)

Self-introduction I'm Hanawa, a frontend engineer in the New Vehicle Subscription Development Group, KINTO ONE Development Division. At my previous job I also worked mainly on the frontend as an engineer. I'd like to put the knowledge and experience I've built up to use in my work while improving my technical skills across domains.

How is your team structured? A six-person frontend team.

What was your first impression of KTC when you joined? Were there any surprises? The employee benefits are amazing.

What is the atmosphere like on site? Everyone is quick to catch up on new technology and good at sharing it, which is stimulating. I think it's an environment where it's easy to make proposals. There are actual cases where services were born from engineers' ideas, and I get the impression that this kind of culture is fostered company-wide.

How did you feel about writing a blog post? I haven't done much public sharing so far, so I thought it was a great opportunity. Beyond this joining entry, I'd like to write an article on some tech topic.

[Question from R.S] "Has anything changed significantly compared with your previous job?" Compared with my previous job, this is a much larger engineering organization (my previous company had five engineers on staff). Honestly, I haven't yet fully grasped who is working on which product and doing what. Study sessions and other events are held regularly across departments, so I hope to deepen my understanding by taking part in them.

Taro ![alt text](/assets/blog/authors/nam/newcomers/icon-taro.jpg =250x)

Self-introduction I'm Taro, and I joined the KTC Creative Office.

How is your team structured? Nine people: directors and designers.

What was your first impression of KTC when you joined? Were there any surprises? From the orientation content, I sensed a One Team spirit of "aligning everyone's vectors and pushing forward."

What is the atmosphere like on site? My team members are cheerful, kind, and highly conscious of creative quality. Communication is active, so it's a stimulating environment where we constantly exchange opinions and ideas as we work.

How did you feel about writing a blog post? I thought, "Ah, this is that thing I read in the Tech Blog archives."

[Question from Hanawa] "What do you pay the most attention to in your day-to-day work?" The "current state and the goal" across "problems, needs, and value."

S.A ![alt text](/assets/blog/authors/nam/newcomers/icon-sa.jpg =250x)

Self-introduction I'm S.A, and I joined the Data Analytics Department.

How is your team structured? Nine people in total, including the leader and myself.

What was your first impression of KTC when you joined? Were there any surprises? I was impressed by how pleasantly relaxed it is.

What is the atmosphere like on site? I feel that everyone has their own specialty, and it's a workplace where I can find plenty of stimulation.

How did you feel about writing a blog post? Writing a blog post is a first for me, so I was nervous, but I think it's a good initiative.

[Question from Taro] "A month after joining, is there anything you've become more conscious of in your work?" The pace is fast, so I'm trying not to get left behind.

Conclusion

Thank you all for sharing your impressions after joining! New members join KINTO Technologies every day! More joining entries from people in all sorts of departments are on the way, so we hope you look forward to them. And KINTO Technologies is still looking for people to work with us in a wide range of departments and roles! For details, please check here!
Svelte Tips

Hello (or good evening), this is part 5 of our irregular Svelte series. Click here to see the other articles:
Insights from using SvelteKit + Svelte for a year
Comparison of Svelte and other JS frameworks - Irregular Svelte series 01
Svelte unit test - Irregular Svelte series 02
Using Storybook with Svelte - Irregular Svelte series 03
Exploring Svelte in Astro - Irregular Svelte series 04
Svelte Tips - Irregular Svelte series 05

That's a lot of articles so far! This time, using the project we built in the previous articles, I will explain, in an easy-to-understand way, the spots where you may find yourself thinking, "I'm stuck! What do I do now?" The table of contents looks as follows:
SSG settings
Differences between +page.ts and +page.server.ts
How to manage meta (and a plugin for it)
The usefulness of each lifecycle function

SSG Settings

With SvelteKit, you can easily configure your deployment target by using a module called an adapter. The default is adapter-auto, which targets SSR platforms, so for a static site you need to install a module called adapter-static. I remember being stuck here at first and racking my brain: it was named "auto," so surely it would handle this case too. It does not. By just installing adapter-static and writing the configuration from the documentation, I quickly got a build optimized for static hosting (note to self: read the documentation properly...). The official Svelte site has a Japanese translation project, so having translated documentation available was very helpful :)

// Without this, you can't build as an SSG
import adapter from '@sveltejs/adapter-static';

/** @type {import('@sveltejs/kit').Config} */
const config = {
    // omitted
    kit: {
        adapter: adapter({
            pages: 'build',
            assets: 'build',
            fallback: null,
            precompress: false,
            strict: true
        })
    }
};

export default config;

Details: https://kit.svelte.jp/docs/adapter-static

Differences Between +page.ts and +page.server.ts

This tripped me up when SvelteKit v1 was released and things changed significantly. It was a radical change, so you might remember it. Since the release of v1, SvelteKit splits a page into the following two kinds of files when fetching data:
*.svelte => files such as UI
+page.server.ts || +page.ts => files that define data, such as fetch calls

The files that define data are thus divided into +page.ts and +page.server.ts. I didn't understand the difference between them at first, so I just picked one for my SSG build. However, during page transitions it began fetching data from the API... Like, whaat?!

+page.ts runs on both the client side and the server side.
+page.server.ts runs only on the server side.

So, if you want to do JAMstack with SSG, +page.server.ts is the way to go. https://kit.svelte.jp/docs/load#universal-vs-server So again, please read the documentation! The documentation is great.

Correct example of running only on the server side:

export async function load({ params, fetch }) {
    const pagesReq = await fetch(`APIURL`);
    let data = await pagesReq.json();
    return {
        data
    };
}
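One related setting, which is my addition rather than something from the original article: with adapter-static, SvelteKit also expects the pages themselves to be prerenderable, which you typically declare once in the root layout. A minimal sketch:

```js
// src/routes/+layout.ts: opt every page into prerendering for the static build
export const prerender = true;
```

With this in place, the build emits plain HTML for each route, and the load functions in +page.server.ts run once at build time instead of on a live server.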
How to Manage meta

Managing meta information is a common challenge for all frameworks and websites. Before discovering the framework, I used to laboriously juggle the trifecta of Pug, JSON, and Gulp or Webpack, but with Svelte it became much easier to deal with. Note that the tags are wrapped in <svelte:head> so that they are injected into the document head rather than into the component markup.

<script lang="ts">
    import { siteTitle, siteDescription } from '$lib/config';
    interface META {
        title: string;
        description?: string;
    }
    export let metadata: META;
</script>

<svelte:head>
    <title>{`${metadata.title}|${siteTitle}`}</title>
    {#if metadata.description}
        <meta name="description" content={metadata.description} />
    {:else}
        <meta name="description" content={siteDescription} />
    {/if}
</svelte:head>

<script lang="ts">
    import Meta from '$lib/components/Meta.svelte';
    let metadata = {
        title: 'title, title, title',
        description: 'description, description, description, description'
    };
</script>

<Meta {metadata} />

You can create and load a meta component like this. You don't have to build it yourself, though, as there are wonderful plugins out there, such as this one: https://github.com/oekazuma/svelte-meta-tags Thank you, kind stranger!!!!

The Usefulness of Each Lifecycle Function

Finally, the unavoidable topic: lifecycle functions. Svelte has five of them: onMount, onDestroy, beforeUpdate, afterUpdate, and tick.

onMount
As the name implies, this is executed after the component is first rendered to the DOM. The timing is almost the same as the mounted hook in Vue.

onDestroy
As the name implies, this is executed when the component is destroyed. You can prevent memory leaks by cleaning up when processing is no longer necessary. Also, during server-side rendering, this is the only lifecycle function that runs.

beforeUpdate
This lifecycle function runs immediately before the DOM is updated. It is often used when you want to act on a state change before it reaches the DOM. Since it runs before the DOM is updated, be careful when writing DOM-related processing here.

afterUpdate
This function is executed after the DOM has been updated and the data is reflected, making it the last lifecycle function to run during an update.

tick
tick lets you handle the moment between a state update and its rendering into the DOM: it returns a promise you can await, so it is possible to wait for the DOM to be updated before processing anything.

Svelte is relatively easy to grasp because it has fewer lifecycle functions than other frameworks. This is all for my Svelte Tips article today.

Conclusion

I wrote a feature article about Svelte titled "Getting Started with Svelte" in the July 2023 issue of Software Design, a Japanese magazine. Please feel free to give it a read if you're interested :) (It also includes a tutorial on JAMstack with SSG, so give it a try!) https://twitter.com/gihyosd/status/1669533941483864072?s=20
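As a postscript to the lifecycle section above, here is an illustrative sketch of my own (not code from the project) that touches all five functions in one tiny component:

```svelte
<!-- LifecycleDemo.svelte: a minimal sketch exercising all five lifecycle functions -->
<script>
    import { onMount, onDestroy, beforeUpdate, afterUpdate, tick } from 'svelte';

    let count = 0;

    onMount(() => console.log('onMount: first render is in the DOM'));
    onDestroy(() => console.log('onDestroy: clean up timers/subscriptions here'));
    beforeUpdate(() => console.log('beforeUpdate: state changed, DOM not yet updated'));
    afterUpdate(() => console.log('afterUpdate: DOM now reflects the state'));

    async function increment() {
        count += 1;
        await tick(); // resolves once the pending DOM update has been applied
        console.log('tick: the DOM now shows', count);
    }
</script>

<button on:click={increment}>{count}</button>
```

Mount the component and click the button, and the console shows the order in which the hooks fire.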
Introduction

Hello, I'm Tada from the SCoE Group at KINTO Technologies (KTC). SCoE stands for Security Center of Excellence, a term that may not be familiar yet. This April, KTC reorganized its CCoE team into the SCoE Group. In this post, I'd like to introduce the background of that change and the SCoE Group's mission. The CCoE team's activities were covered in a previous blog post, so please have a look if you're interested.

Background and Challenges

To explain how the SCoE Group came about, let me first describe its predecessor, the CCoE team. The CCoE team was established in September 2022. I joined KTC in July 2022, so it was set up right after I joined. At its founding, the CCoE set out the following two pillars of activity:
Cloud "utilization": sustaining efficient development through shared services, templates, knowledge sharing, and human resource development
Cloud "governance": allowing the cloud to be used freely under appropriate policies while always keeping it in a secure state

We carried out various activities on both the "utilization" and "governance" fronts. However, since the individual teams within our group had already played the central role in "utilization" even before the CCoE team was formed, the CCoE's activities centered mainly on "governance." As introduced in the previous blog post, the main governance activities were:
Creating standardized cloud security guidelines
Providing security-preset cloud environments
Cloud security monitoring and improvement activities

In particular, in the "cloud security monitoring and improvement" activities, when the posture of a cloud environment used and configured by a product team was deficient, we reviewed risky settings and operations and, where problems existed, asked the product side to make improvements and supported them. However, each product organization had a different mindset about security and a different degree of security awareness, and in some cases improvements stalled because their priority was low.

Meanwhile, looking across KTC as a whole, multiple organizations covered "security," each for a different area. In addition to the organization covering back-office security and the one covering product-environment security, the CCoE covered cloud security, so three organizations existed separately. SOC work was also performed separately in each organization, so reaching company-wide agreement on security measures took time, and from the product side it was hard to tell where to go with security questions. Company-wide, the "Security Group" covering product-environment security played the central role, and the CCoE team acted as a bridge between this Security Group and the product side while carrying out the cloud security monitoring and improvement activities.

Establishment of the SCoE Group

Against this background, the SCoE Group was established to solve the following issues:
Making cloud security improvement activities take root
Integrating the security-related organizations across KTC

Regarding "integrating the security-related organizations across KTC," merging the three organizations into a single department (the IT/IS Department) made more efficient and faster action possible. Regarding "making cloud security improvement activities take root," being organized into the IT/IS Department, a department that includes security, strengthened company-wide security efforts. Until then, the CCoE's activities had been carried out as one team within the Platform Group; belonging to a department with "security" in its name raised our commitment to security. The change from Cloud CoE to Security CoE also sharpens our message as an organization specializing in cloud security and means reinforcing the organization's security functions. In particular, now that we are in the same department as the Security Group, I believe security improvements can be carried out more quickly.

I had some regrets about the CCoE disappearing after a year and a half, but since the CCoE's main focus had always been "governance," I decided to embrace the change. Although the organization itself is gone, the CCoE's activities continue in the form of a company-wide virtual organization.

The SCoE Group's Mission

With the establishment of the SCoE Group, we defined its mission as follows: carry out guardrail monitoring and improvement activities in real time. "Guardrails" here means not only preventive and detective guardrails in the usual sense, but also the configurations and attacks that give rise to security risks. Looking at the current cloud security landscape, many security incidents are caused by flaws in cloud posture, and the time from a posture deficiency to an actual incident is rapidly shrinking. I therefore believe the SCoE's mission comes down to how quickly we can respond when a security risk arises, and how much preparation we can do in advance to be ready to respond.

The SCoE Group's Concrete Activities

To realize this mission, we are proceeding with the following approach:
Prevent security risks from arising
Continuously monitor and analyze security risks
Respond quickly when a security risk arises

For "prevent security risks from arising," we are continuing the CCoE-era work of creating standardized cloud security guidelines and providing security-preset cloud environments. The focus so far has been AWS, but we are extending coverage to Google Cloud and Azure. We also hold study sessions as needed to spread these practices internally.

For "continuously monitor and analyze security risks," we previously covered CSPM (Cloud Security Posture Management) and the SOC, but we have started expanding into CWPP (Cloud Workload Protection Platform) and CIEM (Cloud Infrastructure Entitlement Management). For the SOC, we have also begun consolidating into one what the three organizations used to run separately.

For "respond quickly when a security risk arises," we have started looking into automating and scripting configurations, with the use of generative AI also in view. Going forward, I believe it will be difficult to keep environments secure in the cloud security field without leveraging generative AI, so we are exploring how to use it.

Summary

KINTO Technologies has reorganized its CCoE team into the SCoE Group. It was established so that the cloud "governance" activities of the CCoE could be carried out by an organization more specialized in cloud security. The SCoE Group will play an important role in leading the evolution of cloud security. In a field that grows more complex as the cloud evolves, we want to minimize security risks and be the foundation that supports safe, reliable services. Thank you for reading to the end.

Finally

The SCoE Group is looking for people to work with us. Whether you have hands-on experience in cloud security or no experience but plenty of interest, you are very welcome. Feel free to get in touch. For details, please check here.
I tried using Svelte in Astro

Hello (or good evening), this is part 4 of our irregular Svelte series. You can find our previous articles in the series here:
Insights from using SvelteKit + Svelte for a year *SvelteKit major release supported
Comparison of Svelte and other JS frameworks - Irregular Svelte series 01
Svelte unit test - Irregular Svelte series 02
Using Storybook with Svelte - Irregular Svelte series 03

This time, I tried using Svelte in Astro. Although this is a Svelte series, I am going to change my tune a bit in this article. Have you ever heard of Astro, a framework that's currently gaining popularity? Astro is a framework for building websites without relying on client-side JavaScript by default. JavaScript is loaded only where it is explicitly specified in components, rather than by default; in Astro terms, this concept is commonly referred to as "Islands." Also, as officially stated, Astro supports a variety of popular frameworks! https://astro.build/

Its key features:
Zero JavaScript by default
Multi-Page Application (MPA) architecture
Various UI frameworks can be integrated into Astro

In this article, I will show you how to use Svelte in Astro, and I would like to try out various things such as props and bindings.
Setting up the environment
Importing a Svelte component into Astro
Trying props with Astro and Svelte
Astro and Svelte bindings

Setting up the Environment

Install Astro and Svelte:

yarn create astro astro-svelte

Install Astro in the astro-svelte directory using Astro's CLI. Now we are ready to run Astro, but we can't use Svelte with this alone. Next, install Svelte and the Svelte integration for Astro, so that Svelte can run on Astro.

yarn add @astrojs/svelte svelte

Now that we have the module for running Svelte on Astro, we declare in the Astro config file, astro.config.mjs, that we will be using Svelte. We are now ready to run Svelte on Astro. Thanks to the CLI, the process involves very few steps and is pretty easy.

import { defineConfig } from 'astro/config';
// Add here
import svelte from '@astrojs/svelte';

// https://astro.build/config
export default defineConfig({
    // Add here
    integrations: [svelte()],
});

Now that we are ready, let's actually run Svelte on Astro.

Import a Svelte Component into Astro

<script>
    let text = 'Svelte'
</script>

<p>{text}</p>

First, we created a child Svelte component. This component inserts the string Svelte into the tag. Next, import the Svelte component into the parent Astro component.

---
import Sample from '../components/Sample.svelte'
---
<Sample />

You see, it is very easy. I mean, it's amazing! Given that Astro is an MPA framework, you could leave only the routing to Astro and use Svelte for the components.

Try Props with Astro and Svelte

Export the value in the Svelte component above.

<script>
    export let text = ''
</script>

<p>{text}</p>

Insert a string from Astro.

---
import Sample from '../components/Sample.svelte'
---
<Sample text="Svelte" />

The same string Svelte is now displayed. So, conversely, can props be passed with Svelte as the parent? Let's try it. Define a child component in Astro...

---
export interface Props {
    astrotext: string
}
const { astrotext } = Astro.props
---
<p>{astrotext}</p>

And load it from the Svelte component! src/components/Sample.svelte

<script>
    import Child from './Child.astro'
    export let text = ''
</script>

<p>{text}</p>
<Child astrotext="Svelte" />

Failed. Apparently, the parent needs to be Astro: Astro components are rendered only on the server, so they cannot be imported into a UI framework component. Then, what if both parent and child are Svelte? First, create a Svelte child component.
<script>
    export let svelteChild = ''
</script>

<p>{svelteChild}</p>

Define it in the Svelte parent component...!

<script>
    import SvelteChild from "./SvelteChild.svelte";
    export let text = ''
</script>

<p>{text}</p>
<SvelteChild svelteChild="SvelteChild" />

It worked! It may be obvious, but Svelte-to-Svelte works. Also, it seems that the files under pages must be *.astro files.

Failed cases:
src/pages/+page.svelte
src/pages/index.svelte

It became clear that to import files with different UI framework extensions, an *.astro file needs to be the parent.

Run Svelte Bindings in Astro

Finally, let's try binding. Binding in Svelte:

<script>
    export let text = ''
    let name = '';
</script>

<input bind:value={name}>
<p>{name}</p>
<p>{text}</p>

The idea is that the string name will be bound to the input. src/pages/index.astro is unchanged, so let's take a look at the screen. Whatever you type is not reflected... In Astro, some client-side features (such as user input into input fields like this) do not work by default. If you want to use them, binding becomes possible by adding Astro's client:load directive to the imported component.

---
import Sample from '../components/Sample.svelte'
---
<Sample text="Svelte" client:load />

It worked fine. The client directive is not limited to :load, so it might be interesting to try out the other variations. https://docs.astro.build/en/reference/directives-reference/#client-directives

Summary

Would it really work? I started with that doubt, but combining Astro with a UI framework is practical enough to use in products. Astro appears to be particularly easy to use for corporate websites, and although not covered here, its sitemap functionality is also robust. This is all on how I tried using Svelte in Astro. My next article will conclude the series with practical tips for Svelte (rules of thumb).
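A postscript on those other client directives, as a sketch of my own reusing the Sample component from above: each directive changes when the component's JavaScript is loaded and hydrated.

```astro
---
// src/pages/directives.astro: an illustrative page, not from the original article
import Sample from '../components/Sample.svelte';
---
<!-- hydrate immediately on page load -->
<Sample text="eager" client:load />
<!-- hydrate once the main thread is idle -->
<Sample text="idle" client:idle />
<!-- hydrate only when scrolled into the viewport -->
<Sample text="visible" client:visible />
```

client:visible in particular pairs well with below-the-fold widgets, since their JavaScript is not even fetched for users who never scroll to them.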
Hello (or good evening), welcome to the third installment in the irregular Svelte series. Below are our previous articles in the series:
Insights from using SvelteKit + Svelte for a year
Comparison of Svelte and other JS frameworks - Irregular Svelte series 01
Svelte unit test - Irregular Svelte series 02

In this installment, we will talk about using Svelte with Storybook.

About Storybook

I think Storybook is known as a tool that simplifies the management and operation of UI components, while also offering a range of other functionality. https://storybook.js.org/

What We Will Do in This Article

In this article, I will cover the following three points:
Implement Storybook in a real project
Register components in Storybook
Run tests on Storybook
Let's get started!

Implementing Storybook in a Real Project

This time, I will integrate Storybook into an ongoing project instead of starting from scratch. The project in question: https://noruwaaaaaaaay.kinto-jp.com/ This project was made using SvelteKit + microCMS + [S3 + CloudFront]. They have interesting content, so I recommend you visit the website! Recommended articles (in Japanese):
https://noruwaaaaaaaay.kinto-jp.com/post/93m02vm8chf3/
https://noruwaaaaaaaay.kinto-jp.com/post/fe35u405761/

Deployment Steps

npx storybook@latest init

Run this command in the directory where the project is located. Doing this completes the initial setup of Storybook in your project. A directory called .storybook and a directory under src called stories will be created. That is all for the initial setup.

Register Components in Storybook

Try Running Storybook

Try launching Storybook by running yarn storybook. You will see a screen like this. Since the components in src/stories/ and the **.stories.ts files are not used in the project, I will delete all of the files in stories, add Button.stories.ts back, and register the components that are actually used for Noru-Way in Storybook.

Try Registering Components in Storybook

Here are the visual and the code of a button that is an actual component in the project.

<script lang="ts">
    export let button: { to: string; text: string };
</script>

<div class="button-item">
    <a href={button.to} class="link-block">
        <span class="link-block-text">{button.text}</span>
    </a>
</div>

Let's register the button component above in Storybook.

import type { Meta, StoryObj } from '@storybook/svelte';
// Register the button component
import Button from '$lib/components/Button.svelte'

const meta: Meta<Button> = {
    title: 'Example/Button',
    component: Button,
    tags: ['autodocs'],
};

export default meta;
type Story = StoryObj<Button>;

// Pass the props the button expects via args
export const Primary: Story = {
    args: {
        button: { to: '', text: '' }
    },
};

The screen will be updated to look like this. Let's try actually replacing the text on the Storybook screen. I was able to confirm that it actually changed. Albeit very minimal, that is all for registering the button component.

Try Testing with Storybook

I will try to test the actual stories file for the component I've added, keeping the process as simple as possible.

Deployment Steps

First, install the module required for testing.

yarn add --dev @storybook/test-runner

Running Tests on Storybook

Let's test it. (The test runner targets a running Storybook, so keep yarn storybook running in another terminal.)

yarn test-storybook

If you run the above and the test passes, the output will look like this. If the test fails, it will look something like this, depending on which part of the test fails. With that, I was able to verify that Storybook was working properly.
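By default the test runner performs a smoke test on each story, but it will also execute interaction tests when a story defines a play function. Here is a hedged sketch of what that could look like for the button above, in the same stories file; the packages shown are the Storybook 7-era ones, and the args values are illustrative:

```ts
// Button.stories.ts: extending the Primary story with an interaction test
import { within } from '@storybook/testing-library';
import { expect } from '@storybook/jest';

export const Primary: Story = {
    args: { button: { to: '/campaign', text: 'Read more' } },
    // yarn test-storybook runs this against the rendered story in a real browser
    play: async ({ canvasElement }) => {
        const canvas = within(canvasElement);
        await expect(canvas.getByText('Read more')).toBeInTheDocument();
    },
};
```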
There are many options available, so if you want to know more, please see below. https://storybook.js.org/docs/svelte/writing-tests/test-runner

Conclusion

As you saw, I was able to easily install Storybook, add stories for components, and test that Storybook works as intended. Although it was very hard to add Storybook to an HTML-only project when I tried in the past, it is actually pretty easy now, as this demonstration showed. That made me realize that we live in good times. That concludes today's article on using Svelte with Storybook. Next time, I will explore something different by integrating Svelte with Astro. I hope you look forward to it!
Introduction Hello, I am Suzuki and I joined the company in November! I interviewed those who joined the company in November 2023 about their first impressions of the company and summarized them in this article. I hope this content will be useful for those who are interested in KINTO Technologies, and serve as a reflection for the members who participated in the interview. Shirai Self-introduction I am Shirai from the Platform Group; I joined the company in August. I work on designing and building AWS infrastructure. I thought this entry would be interesting, so I decided to participate in it! How is your team structured? We have two members at the Osaka Tech Lab and five at the Jimbocho office in Tokyo, making a total of seven. What was your first impression of KINTO Technologies when you joined? Were there any surprises? The change from a fully remote environment to one where I primarily work onsite (working from home 1-2 days per week) was a bit disorienting. On the other hand, I now feel that it is easier to discuss things when I'm at the office, which leaves me with a positive impression. I had the impression that everyone had strong technical skills. Maybe because I hadn't been deeply involved with infrastructure the whole time, at first I struggled to keep up with their discussions. What is the atmosphere like on site? It is very homely! Since I am basically at the office, I can immediately consult with other team members, which is super helpful. In addition, we have introduced Gather, so you can easily go to the person you want to consult with even while working from home. I think the reason we can discuss matters so easily is that we feel at home with each other and have the kind of relationship where we also talk about things outside of work. How did you feel about writing a blog post? I take it as a challenge, like anything else. I think the KINTO Tech Blog has a lot of good articles, and this is a great stepping stone. Actually, I have already written an article for the Advent Calendar titled "Deployment Process in CloudFront Functions and Operational Kaizen", so please have a look! Question from one November newcomer to another Are there any clubs or communities within the company where people with similar interests can get together? If so, which ones do you participate in? There are many! The Tech Blog has even introduced the sports clubs! I participate in the running circle (RUN TO) and the e-sports club, even though they haven't been featured yet. AKD Self-introduction I'm AKD from the Operation Process team, Corporate IT Group. I'm the only one among the November newcomers based at the Osaka Tech Lab. I work as a corporate engineer, in what is commonly referred to as the information systems team. How is your team structured? Our team, consisting of four members, is responsible for on/offboarding, and for visualizing and improving processes related to our PCs and various SaaS. What was your first impression of KINTO Technologies when you joined? Were there any surprises? I initially thought that, as an engineering company, there might be limited communication opportunities. However, I've found there are numerous opportunities for communication, including regular study sessions and meetings to review department meeting minutes, which was a good surprise. What is the atmosphere like on site? I feel that everyone is building relationships of mutual respect, without hesitation, in a positive way. The Corporate IT Group comprises members stationed in Muromachi (Tokyo), Jimbocho (Tokyo), Nagoya, and Osaka, organized into five teams.
Despite its size, we keep a Zoom channel constantly open for communication, where conversations take place across locations and teams, and I believe this fosters a positive atmosphere. How did you feel about writing a blog post? I had noticed earlier posts from the company and assumed they were written only by selected people, so I was surprised to find that everyone writes! Also, it is simple, but I like that it gives a sense of belonging to a peer group. Question from one November newcomer to another After a month at the Osaka Tech Lab, could you share your impressions of the atmosphere you've experienced? It is a highly inclusive environment with a welcoming atmosphere for everyone, from people who briefly come here on business trips to new hires. SSU Self-introduction This is SSU from the KINTO ONE Development Group. As a web director, I am responsible for the web direction of DX projects for Toyota dealerships. How is your team structured? I am a member of the DX Planning team within the Owned Media & Incubation Development Group. Our team's mission is to provide a wide range of mobility solutions to customers by addressing the bottlenecks within Toyota dealerships through the power of IT. There are seven members in total: two producers, three directors, and two designers. What was your first impression of KINTO Technologies when you joined? Were there any surprises? My first impression is that there are more young people and more freedom than the stereotypical image of the automotive industry suggests. What is the atmosphere like on site? It's only been a month since I joined the company, but I feel that the DX Planning team is filled with distinct personalities and that everyone brings their own uniqueness to the table. I think our team's strength lies in these differences, because they allow us to notice what might otherwise be overlooked when we work on a project together, whether through individual communication or in meetings. How did you feel about writing a blog post? I thought: my first time blogging has finally arrived. Question from one November newcomer to another What is your favorite emoji in the KINTO Technologies Slack workspace? I like the mushroom emoji running with a determined face. kiki Self-introduction I am kiki from the Human Resources Group. I participate in the hiring process as well as the Tech Blog operations project team. How is your team structured? The HR team is currently made up of six people (as of December 2023), including myself. We have members with diverse personalities, and everyone takes an interest in each other's work. Together, we work diligently and are actively involved in recruitment tasks on a daily basis. What was your first impression of KINTO Technologies when you joined? Were there any surprises? I found the organization flatter and more open than I expected. Perhaps it's because of my role in HR, but I've noticed that discussions seldom hinge on who said something, but rather on a "what is the best course right now" perspective for moving the team forward. In my second week after joining the company, I participated in the information-sharing meeting at the Osaka Tech Lab, and my impression is that there are many warm people who welcomed me like a friend right away! What is the atmosphere like on site? To create a space where it's easy to talk, we often engage in small talk. It is an environment where you can stay tuned to what's happening within the organization and among its people.
I was quite reserved for the first two weeks after joining the company, as I was still getting acquainted. However, given my prior experience in recruiting, I appreciate this atmosphere where I can easily raise any questions I have, such as "What's happening here?", at any time. How did you feel about writing a blog post? Simply: "Happy!" The members of the Tech Blog project team have been in touch since our first month at the company. I haven't been active in external communication before, so I worry a bit about saying something problematic. However, I enjoy writing, and I see it as a valuable space for exploration. Question from one November newcomer to another How do you relieve stress? I listen to rock music, a genre I don't usually listen to much. Franz Ferdinand and Yoru no Honki Dance are particularly enjoyable. Also, I came across an article suggesting that doing funny dances at home is good for relieving stress, so I've been trying to dance at home where no one can see me. (Highly recommended!) Y.Suzuki Self-introduction I am Suzuki from the Project Promotion Group. I am in charge of front-end engineering at KINTO FACTORY. How is your team structured? Although some service providers and people from other divisions are involved, the team is made up of KINTO Technologies members, from management to implementation. Among them, the front-end team currently consists of six members, with another new member joining in December. What was your first impression of KINTO Technologies when you joined? Were there any surprises? Before joining the company, I expected a more formal environment due to the higher average age and the nature of being a business corporation. But when I joined, there was plenty of flat communication and a lot of openness to new initiatives and to things that simply seemed interesting. I found the environment filled with people older and more senior than myself who were playful and inquisitive while drawing on their experience, and who successfully balanced casualness with maturity. Since my previous job was mainly work from home, I thought I would have a hard time commuting to work. However, I've found that I can easily adapt to the environment, and I actually really enjoy the hybrid work style 😳 What is the atmosphere like on site? There were many things I didn't understand at first, so I felt I had to build relationships where we could easily talk to each other. This is why, less than a week after I joined the company, I tried handing out "Nerunerunerune" (a candy you make yourself) at my desk. I have been chatting and smiling with everyone, and recently I have been eating mandarin oranges with my team members while talking about work. About two weeks after joining, when I mentioned during one-on-one meetings and meals that I could do other things besides engineering, I was told, "There aren't many people who can do that, so let me see if we can make good use of it." I am currently looking to expand my work beyond the front end to improve the product! When the timing works out, we have lunch together when we come to the office, and I find that there are many opportunities to communicate beyond work. How did you feel about writing a blog post? I had the impression that blogs for engineers focus on technology, meaning that much of the content already exists elsewhere, requiring extensive verification and making it challenging to write, even when just deciding on a subject in the first place.
But this time it was a simple entry, so I thought I could provide some useful information to those interested in KINTO Technologies. Question from one November newcomer to another What is the most enjoyable moment at your work? I find joy in getting inquiries, even about the simplest things. Although I have only been with the company for a short time, I am glad to know that there are aspects where colleagues can rely on me and tasks I can contribute to. I am trying to absorb more from the people around me whom I respect, in order to broaden the range of things I can do. T.F Self-introduction I am T.F from the Project Promotion Group. I am in charge of the back end of the KINTO ONE used-car service. How is your team structured? The front end, back end, and BFF (backend for frontend) are each handled by a mix of employees and subcontractors. What was your first impression of KINTO Technologies when you joined? Were there any surprises? I was surprised that I could take paid holidays as soon as I joined the company. I'm thinking of moving to a new place soon, so it will be helpful for that. What is the atmosphere like on site? There are many friendly people here. The atmosphere is conducive to asking questions and making suggestions. How did you feel about writing a blog post? It is a strange feeling to transition from being a reader before joining the company to being a writer. Question from one November newcomer to another What were your duties during your first month at the company? I've just joined the company, so I'm only doing simple tasks so far. I did some smaller development, code reviews, and estimates for projects scheduled to begin in earnest next year. I am also working behind the scenes to introduce domain-driven design and clean architecture, among other things. A.N Self-introduction I am A.N from the Common Service Development Group. I am a product manager for the membership platform underlying KINTO ID. How is your team structured? There are six members (including subcontractors). What was your first impression of KINTO Technologies when you joined? Were there any surprises? I caught a cold on my first day at the company and had to take the third day off, but I was able to receive sick leave from my first month, which was a great help. What is the atmosphere like on site? I think it also depends on each manager's policy, but the atmosphere here is respectful of each member's freedom. Everyone is an expert, so they act autonomously. How did you feel about writing a blog post? I am a little afraid it will have some impact on the company's public relations. Question from one November newcomer to another KINTO Technologies has Slack channels for hobbies and activities outside of work. Is there anything you are interested in? Just today, I learned about a channel where, every morning when people come to work, they simply comment "Good morning!" It is a mystery why such a channel was created, but it is soothing because everyone participating seems to be having fun. F.T Self-introduction I am F.T from the Mobile App Development Group. I am in charge of the Unlimited app for Android. How is your team structured? The Android team consists of five members, including myself. What was your first impression of KINTO Technologies when you joined? Were there any surprises? I was surprised that there was a thorough orientation, despite my being a mid-career hire.
I found it amazing how few boundaries there are (both physical and psychological) between teams in the office (e.g., study sessions with Android developers, and communication regardless of OS or the projects people are in charge of). What is the atmosphere like on site? There is a lot of time spent working in silence. However, there is a friendly atmosphere where you can ask questions immediately when you need help. How did you feel about writing a blog post? I was full of anxiety. Question from one November newcomer to another After a month with the company, what do you think is the best thing about joining? As an engineer, I am honestly happy to be working in an environment filled with highly skilled professionals. Many are multi-talented, and I learn a lot from them, even about things unrelated to work. W.Song Self-introduction My name is W.Song from the Data Engineering team in the Data Analytics Group. I am mainly responsible for data linkage. How is your team structured? We are four in total, including the team leader. What was your first impression of KINTO Technologies when you joined? Were there any surprises? It's great that there are bookshelves in the office. There are many popular books, and I feel that everyone is highly motivated to learn. Actually, the next point may be my own assumption rather than a real surprise or gap: I had seen pictures of the office before I joined, especially the one of the junction, which looked very stylish, and I had assumed it was a free-address office. What is the atmosphere like on site? I feel I can take my time when I speak. Although everyone was busy, they gave me detailed explanations, and I really appreciate it. This is the first environment in a long time where I feel we can communicate a lot. How did you feel about writing a blog post? I think it is a really great way to produce output. I feel that I can not only promote myself but also connect with people who share similar concerns and ideas, and potentially make friends. Question from one November newcomer to another What has changed since joining KINTO Technologies? My interest in cars is deepening. I come to the office three times a week, so I should be thinner than before. Also, my impression of the 😇 emoji has changed a lot. I used to use it often because I thought it meant "Happy, I did it, it went well," but I was surprised to find out that it is actually used to mean "I'm screwed, I'm finished." Conclusion Thank you very much for sharing your thoughts in the midst of your busy schedules right after joining the company! The number of new members at KINTO Technologies is increasing day by day. I hope you look forward to more articles about our new members joining the company and being assigned to various divisions. Furthermore, KINTO Technologies is seeking individuals who can collaborate across various divisions and occupations! For more information, please click here.
Introduction

Hello! This is Hasegawa (@gotlinan), an Android engineer at KINTO Technologies! I usually work on the development of an app called myroute. Check out the other articles written by myroute members!
Jetpack Compose of myroute Android App
A Compose Beginner's Impressive Experience With Preview

In this article, I will explain Structured Concurrency using Kotlin coroutines. If you already know about Structured Concurrency but do not know how to use coroutines, please jump ahead to the Convenience Functions for Concurrency section.

Structured Concurrency?

So what is Structured Concurrency? In Japanese, it would be something like "structured parallel processing." Imagine having two or more processes running in parallel, each correctly managing the cancellations and errors that may occur. Through this article, let's learn more about Structured Concurrency! I'll introduce two common examples here.

1. Wanting to Coordinate Errors

The first example is to execute Task 1 and Task 2, and then execute Task 3 based on their results. As an illustration: after executing Task 1 and Task 2, execute Task 3 according to the results. In this case, if an error occurs in Task 1, it is pointless to continue with Task 2. Therefore, if an error occurs in Task 1, Task 2 must be canceled. Similarly, if an error occurs in Task 2, Task 1 should be canceled, and there is no need to proceed to Task 3.

2. Not Wanting to Coordinate Errors

The second common example is when there are multiple areas on the screen, each displayed independently. As a diagram: multiple areas on the screen, each displayed independently. In this case, even if an error occurs in Task 1, you may still want to display the results of Task 2 or Task 3. Therefore, even if an error occurs in Task 1, Tasks 2 and 3 must continue without being canceled.

I hope these examples were clear. With coroutines, the above patterns can be easily implemented based on the idea of Structured Concurrency! However, a deeper understanding requires grasping the basics of coroutines, so that is what we will learn next. If you already know the basics, skip ahead to the Convenience Functions for Concurrency section.

Coroutines Basics

Let's cover the basics of coroutines before going into detail. In coroutines, asynchronous processing can be initiated by calling the launch function on a CoroutineScope. Specifically, it looks like this:

CoroutineScope.launch {
    // Code to be executed
}

So why do we need a CoroutineScope? Because in asynchronous processing, "which thread to execute on" and "how to behave in case of cancellation or error" are very important. A CoroutineScope has a CoroutineContext, and a coroutine run on a given CoroutineScope is controlled based on that CoroutineContext. Specifically, a CoroutineContext consists of the following elements:
Dispatcher: which thread to run on
Job: execution of cancellations, propagation of cancellations and errors
CoroutineExceptionHandler: error handling

When creating a CoroutineScope, each element can be passed with the + operator. And a CoroutineContext is inherited between parent and child coroutines.
For example, suppose you have the following code:

val handler = CoroutineExceptionHandler { _, _ -> }
val scope = CoroutineScope(Dispatchers.Default + Job() + handler)
scope.launch { // Parent
    launch { // Child 1
        launch {} // Child 1-1
        launch {} // Child 1-2
    }
    launch {} // Child 2
}

In this case, the CoroutineContext is inherited as follows. Inheritance of CoroutineContext

Well, if you look at the image, it looks like the Job has been newly created instead of inherited, doesn't it? This is not a mistake. Although I stated that "a CoroutineContext is inherited between parent and child coroutines," strictly speaking it is more correct to say that "a CoroutineContext is inherited between parent and child coroutines, except for the Job." So what about the Job? Let's learn more about it in the next section!

What is a Job?

What is a Job in Kotlin coroutines? In short, it is the thing that "controls the execution of the coroutine." A Job has a cancel method, which allows developers to cancel started coroutines at any time.

val job = scope.launch {
    println("start")
    delay(10000) // Long process
    println("end")
}
job.cancel()
// start (printed)
// end (not printed)

The Jobs associated with viewModelScope and lifecycleScope, which Android engineers often use, are canceled at the end of their respective lifecycles. This allows ongoing processes to be canceled correctly without the developer having to worry about screen transitions. Such is the importance of the Job, which also plays the role of propagating cancellations and errors between parent and child coroutines. In the previous section I said that the Job is not inherited; using that same example, Jobs form a hierarchical relationship, as shown in the image below. Hierarchical relationship of Jobs

A partial definition of Job looks like this:

public interface Job : CoroutineContext.Element {
    public val parent: Job?
    public val children: Sequence<Job>
}

It maintains parent-child relationships, so parent and child Jobs can be managed when cancellations or errors occur. In the next sections, let's see how coroutines propagate cancellations and errors through the hierarchical relationships of Jobs!

Propagation of Cancellations

If a coroutine is canceled, the behavior is as follows:
It cancels all of its child coroutines
It does not affect its own parent coroutine
*It is also possible to run a coroutine that is unaffected by the cancellation of its parent by changing the CoroutineContext to NonCancellable. I will not cover this here, since it deviates from the theme of Structured Concurrency.

Cancellation propagates downward in the Job hierarchy. In the example below, if Job2 is canceled, the coroutines running on Job2, Job3, and Job4 will be canceled. Propagation of cancellations

Propagation of Errors

Jobs can be broadly divided into Job and SupervisorJob, and the behavior when an error occurs differs between the two. I have summarized the behavior in the two tables below: one for when an error occurs in a coroutine's own Job, and one for when an error propagates up from a child Job.
When an error occurs in its own Job:

| | Child Jobs | Its own Job | Parent Job |
| --- | --- | --- | --- |
| Job | Cancels all | Completes with error | Error is propagated |
| SupervisorJob | Cancels all | Completes with error | Error is not propagated |

When an error propagates from a child Job:

| | Other child Jobs | Its own Job | Parent Job |
| --- | --- | --- | --- |
| Job | Cancels all | Completes with error | Error is propagated |
| SupervisorJob | No action | No action | Error is not propagated |

The images below illustrate the behavior described in the two tables, for Job and SupervisorJob respectively.

For Job: if an error occurs in Job2 under a normal Job,
The child Jobs, Job3 and Job4, are canceled
Job2 itself completes with an error
The error propagates to the parent Job, Job1
Job1 cancels its other child Job, Job5
Job1 completes with an error

For SupervisorJob: if an error occurs in Job2 under a SupervisorJob,
The child Jobs, Job3 and Job4, are canceled
Job2 itself completes with an error
The error reaches the parent SupervisorJob, Job1
As a reminder, the SupervisorJob (Job1) that the error reached does not cancel its other child Job (Job5), and itself completes normally.

Moreover, you can use invokeOnCompletion to check whether a Job completed normally, with an error, or by cancellation.

val job = scope.launch {} // Some work
job.invokeOnCompletion { cause ->
    when (cause) {
        is CancellationException -> {} // cancellation
        is Throwable -> {} // other exceptions
        null -> {} // normal completion
    }
}

Uncaught Exceptions

By the way, what about exceptions that are not caught by a coroutine? For example, what happens if an error occurs in, or propagates to, a top-level Job? What happens if an error occurs in, or propagates to, a SupervisorJob? The answers are:
A CoroutineExceptionHandler is called if one is specified.
If no CoroutineExceptionHandler is specified, the thread's default UncaughtExceptionHandler is called.

As mentioned earlier in Coroutines Basics, CoroutineExceptionHandler is also an element of CoroutineContext. It can be passed as follows:

val handler = CoroutineExceptionHandler { coroutineContext, throwable ->
    // Handle exception
}
val scope = CoroutineScope(Dispatchers.Default + handler)

If no CoroutineExceptionHandler is specified, the thread's default UncaughtExceptionHandler is called. If you wish to specify it, write the following:

Thread.setDefaultUncaughtExceptionHandler { thread, exception ->
    // Handle uncaught exception
}

Until writing this article, I had misunderstood this point: I thought that if I used SupervisorJob, the application would not terminate, because the error would not propagate. However, SupervisorJob only stops errors from propagating within the coroutine Job hierarchy. Therefore, if neither of the above two types of handlers is defined appropriately, things may not work as intended. For example, in an Android app, the thread's default UncaughtExceptionHandler causes the app to terminate (crash) unless the developer specifies otherwise. Running plain Kotlin code, on the other hand, will just print an error log. Also, slightly off topic, you may be wondering whether try-catch or CoroutineExceptionHandler should be used. When an error is caught by a CoroutineExceptionHandler, the coroutine's Job has already completed and cannot be resumed. Basically, use try-catch for recoverable errors. When implementing based on the idea of Structured Concurrency, or when you want to log errors, setting up a CoroutineExceptionHandler seems like a good approach.
Convenience Functions for Concurrency

The explanation got a little long, but in coroutines, functions such as coroutineScope() and supervisorScope() are used to achieve Structured Concurrency.

coroutineScope()

Remember example 1, Wanting to Coordinate Errors? You can use coroutineScope() in such a case. coroutineScope() waits until all started child coroutines are complete, and if an error occurs in one child coroutine, the other child coroutines are canceled. The code would be as follows:
Child coroutine 1 and Child coroutine 2 are executed concurrently
Child coroutine 3 is executed after Child coroutines 1 and 2 have finished
Regardless of which child coroutine encounters an error, the others are canceled

scope.launch {
    coroutineScope {
        launch { // Child 1
        }
        launch { // Child 2
        }
    }
    // Child 3
}

supervisorScope()

Remember example 2, Not Wanting to Coordinate Errors? You can use supervisorScope() in such a case. supervisorScope() also waits until all started child coroutines are complete, but if an error occurs in one child coroutine, the other child coroutines are not canceled. The code would be as follows:
Child coroutines 1, 2, and 3 are executed concurrently
An error in any child coroutine does not affect the other child coroutines

scope.launch {
    supervisorScope {
        launch { // Child 1
        }
        launch { // Child 2
        }
        launch { // Child 3
        }
    }
}

Summary

How was it? I hope you now have a better understanding of Structured Concurrency. There were quite a few basics to cover, but understanding them will help you navigate more complex implementations. And once you can write structured concurrency well, improving the local performance of a service becomes relatively easy. Why not consider Structured Concurrency if you have bottlenecks where work runs needlessly in series? That's it for now!
Introduction Hey there! I'm Viacheslav Vorona, an iOS engineer. This year, my colleagues and I had the opportunity to visit try! Swift Tokyo, an event that got me thinking about some tendencies within the Swift community. Some of them are fairly new, while others have been around for a while but have recently evolved. Today, I would like to share my observations with you. The elephant in the room... Let's get it out of the way: the much-anticipated Apple Vision Pro was released roughly two months before try! Swift, so it only makes sense that the conference room was full of Apple fans excited about it. People who hadn't tried the gear out yet were looking for any opportunity to put it on their heads for a couple of minutes, pinching the air with their fingers. All seats in the room were occupied during the talk about the implementation of a visionOS app by Satoshi Hattori. The application itself was as simple as it could get: just a circular timer floating in the virtual space in front of the user. But once Hattori-san actually connected the headset and started to show the results of his work in real time, the audience went wild. I could also mention that spatial computing enthusiasts organized their own small, unofficial meeting on the second day of the conference. Unlike some other devices from Apple, the Vision Pro is forming its own quite noticeable sub-community within the Swift community. All the geeks who grew up watching futuristic virtual devices in movies now feel they are getting closer to their cyberpunk dreams. It's exciting, or scary, depending on your perspective. The choice is yours. Oh, and of course, we can't move to the next topic without an honorable mention of the "Swift Punk" performance at the conference opening, which was also inspired by the Vision Pro. $10,000+ worth of scenic props New Swift frontiers This trend is not quite new, but recently it has been getting some exciting development in multiple directions at once. I am talking about the Swift community striving to escape its Apple-devices homeland and expand beyond it. Some things, like server-side Swift, have been around for a while: Vapor, for example, has been out since 2016, and even though it hasn't been widely adopted, it keeps running. Tim Condon from the Vapor Core Team did a great presentation on large-codebase migration at try! Swift. The topic was largely inspired by the migration Vapor is undergoing at the moment to fully support Swift Concurrency by version 5.0. According to Tim, that version is likely to be released in summer 2024, so if you are interested in trying out server-side Swift, that might be a great time to start. Tim Condon, the man behind Vapor. Nice shirt, by the way. To accompany your Swift-written API, you might also try to implement a webpage using that same Swift language. Conveniently, that was the topic of the talk given by Paul Hudson. His lecture on leveraging Swift result builders for HTML generation was clear and exciting, just as one would expect from such an experienced educator as Paul. The climax of his speech was the announcement of Ignite, a new site builder by Paul using the exact same principle he was talking about in his speech. Paul Hudson, the man behind... a lot of things. Including Ignite, from now on.
Another memorable presentation in this category was given by Saleem Abdulrasool, a big cross-platform Swift enthusiast, who talked about the differences and similarities between Windows and macOS and the challenges Swift developers would face should they try to make a Windows application. Last, but not least, there was a curious presentation by Yuta Saito, who talked about tactics for reducing the size of Swift binaries. This topic might seem unrelated to the trend I'm describing here, but that changed when Saito-san showed the audience a simple Swift app deployed to Playdate, a tiny handheld console. Truly impressive. It is pleasing to see that Swift is not only gaining new capabilities on Apple platforms but also relentlessly exploring new frontiers. Friend Computer Lastly, I would like to talk about AIs, LLMs, and so on: a topic that has been all over the place for the last couple of years and keeps re-emerging every time a new "more-powerful-than-everything-else" model is released. In a digital gold rush, software companies nowadays are trying to apply AI processing to anything possible. Of course, the Swift community could not stay unaffected, and at try! Swift this phenomenon was reflected in multiple ways. One of the first presentations at the conference, given by Xingyu Wang, an engineer from Duolingo, was dedicated to the Roleplay feature introduced by her company in collaboration with OpenAI. She discussed the use of an AI-powered backend, optimization challenges such as AI-generated responses taking significantly longer, and the techniques and tricks Xingyu's team applied to mitigate them. Overall, the presentation was optimistic, painting a bright image of the endless opportunities provided by AI. On the other side of the spectrum, there was a talk by Emad Ghorbaninia titled "What Can We Do Without AI in the Future?", which caught my attention before the conference; I was quite curious about what it would entail. The talk turned out to be a thoughtful reflection on the challenges we, as developers and humans, are about to face with the further development of AI. To put it simply, Emad's general thought is that we should focus on the most human aspects of our creative process so as not to lose the race against the incoming generation of silicon-brained developers. Hard to disagree. Conclusion Reflecting on the diverse discussions at try! Swift Tokyo, it's fascinating to see how the Swift community continuously evolves and adapts to new technological landscapes. From embracing groundbreaking hardware like the Apple Vision Pro to exploring new realms with server-side Swift and AI integrations, these developments highlight a community in flux, responsive to the broader tech environment. This curiosity and willingness to innovate ensure that Swift is not just a language confined to iOS development but a broader toolset that pushes the boundaries of what's possible in software. As we look forward, the dynamic interplay between technology and developer creativity within the Swift community promises to bring even more exciting advancements. It's a thrilling time to be part of this vibrant ecosystem.