# KINTO Technologies Tech Blog
## Setting Up a VS Code Debugging Environment for a React Project

### Introduction

Hello, I am Kiyuno, a frontend developer on KINTO FACTORY. In this post, I would like to summarize how I set up an environment for debugging a React project in Visual Studio Code (hereinafter VS Code). Until now I had only used VS Code as a glorified notepad, so I struggled quite a bit (mainly with the language barrier). To everyone out there setting up a VS Code debugging environment right now: please climb over my dead body. Good luck!

### Environment

- OS: macOS Sonoma 14.1.2
- VS Code: 1.85.1
- Node.js: 18.15.0
- Terminal: zsh

### Setup Steps

#### 1. Set up launch.json

Add a launch.json file to hold the launch configuration for debugging. Select the "Run and Debug" menu in the VS Code sidebar, then click "create a launch.json file"; the file will be created inside the project.

:::message
When creating the file for the first time you are asked to select a debugger. Since this is a React project, select Node.js.
:::

Immediately after creation, launch.json contains a default launch configuration.

#### 2. Add a new launch configuration

Add the following debug configuration to the launch.json you just created:

```json
{
  "name": "[localhost]Chromeデバッグ",
  "type": "node-terminal",
  "request": "launch",
  "command": "npm run dev",
  "serverReadyAction": {
    "pattern": "started server on .+, url: (https?://.+)",
    "uriFormat": "%s",
    "action": "debugWithChrome"
  },
  "sourceMaps": true,
  "trace": true,
  "sourceMapPathOverrides": {
    "webpack:///./*": "${webRoot}/src/*"
  }
}
```

I deleted the default launch configuration, but leaving it in place causes no problems. You can also edit the `command` property to change the startup command, or run additional tasks before debugging with the `preLaunchTask` property (I will not go into detail here, but a hedged sketch appears at the end of this post). The value of the `name` property becomes the display name of the launch configuration.

#### 3. Start debugging

Now just press F5 and the debugger starts. On success, debug controls appear at the top center of the window; use them or the function keys to step in, step over, and so on.

### Troubleshooting

Below are the problems I actually ran into. I hope this helps anyone unlucky enough to hit them too.

#### The debug terminal opens as "sh-3.2$" and fails with "npm: command not found"

Restarting VS Code solves this. It appears to happen when VS Code is launched automatically by Microsoft software. In my case, I sign in to Microsoft 365 when my PC starts, and VS Code auto-launches on successful sign-in, which is where I hit this problem.

#### npm is installed, but starting the debugger still gives "npm: command not found"

Add the following to .vscode/settings.json (create the file if it does not exist yet):

```json
{
  // Path settings for running npm scripts
  "terminal.integrated.profiles.osx": {
    "zsh": {
      "path": "/bin/zsh",
      "args": ["-l", "-i"]
    }
  }
}
```

:::message
If your terminal runs bash, change the property name and path from zsh to bash.
:::

This puts the right PATH in the debugger's terminal, so npm commands can run.

### Summary

This post covered setting up a VS Code debugging environment for a React project. There is still plenty of room to ~~play~~ improve efficiency in VS Code, so next time I hope to go further with tasks.json. In my personal opinion, being able to debug during development massively raises your QOL and improves development productivity. As a side effect, this may even raise the smile rate in the office and commute-time happiness... maybe. May your debugging life be blessed. Thank you for reading!

Finally, KINTO FACTORY, where I work, is looking for people to work with us. If you are interested, please check out our job openings!
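As promised above, here is a rough illustration of what the deferred `preLaunchTask` wiring could look like. This is a minimal sketch, not from the original post: the task label and the `npm run codegen` command are hypothetical placeholders.

```json
// .vscode/tasks.json — a hypothetical task that runs before the debugger starts
{
  "version": "2.0.0",
  "tasks": [
    {
      "label": "generate-api-types",
      "type": "shell",
      "command": "npm run codegen"
    }
  ]
}
```

Referencing it from the launch configuration is then a one-line addition, e.g. `"preLaunchTask": "generate-api-types"`; VS Code runs the task and waits for it to finish before launching the debugger.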
---

## A Compose Beginner Meets Preview

### Self-Introduction

Nice to meet you! I am Romie, and I work on the Android side of the my route app in the Mobile App Development Group. I started Android development at my previous job two years ago, but including personal projects I had implemented every layout in XML, so, embarrassingly, I only properly touched Compose after joining KINTO Technologies (hereinafter KTC) in December 2023. This is also my first TechBlog article!

### Who This Article Is For

This article is aimed at:

- Complete beginners at Android development
- People who, for various reasons, have only ever written layouts in XML and do not know Compose at all
- People who struggle to get their changes to show up when checking behavior on a real device

### Meeting Preview

Right after joining, knowing nothing about Compose, I was given the task of reimplementing the following screen from XML in Compose.

![About my route screen](/assets/blog/authors/romie/2024-02-08-compose-preview-beginner/03.png =200x)
*About my route screen*

When I started reading the code, I found a mysterious function whose name began with "Preview":

```kotlin
@Preview
@Composable
private fun PreviewAccountCenter() {
    SampleAppTheme {
        AccountCenter()
    }
}
```

The function displayed a preview of the account center button. However, despite being a private function, it was not called anywhere in the same .kt file, so I concluded it was unused and went ahead with the implementation without touching the Preview-related code. I finished the Compose conversion and opened a pull request, and a comment came back:

"There is no Preview — please add one!"

Wondering what the point of an uncalled function could be, I imitated the other screens and implemented a Preview of the whole screen, built the app, and confirmed it ran fine on a real device. At that point I was still only looking at the device. Still hazy on what Preview was for, I happened to glance at Android Studio's Split view — and there was exactly the same screen as on the device, rendered inside Android Studio.

"So that's what Preview is: you don't call the function; it exists so the Split view can render it!"

The official documentation says as much: you do not have to deploy the app to a device or emulator; you can preview a given composable in multiple variations of width and height constraints, font scaling, and theme; and previews update as you develop, so you can check changes quickly.

### Preview and Checking Behavior

One day, I was asked to add a "direction from the station exit" section to the route detail screen of the my route app: images and text for eight directions. The implementation itself was quick; the problem was checking it. Verifying that the images and text for all eight directions were correct by actually running the app takes a very long time. The reproduction steps are shown below.

![Steps to reproduce the direction section](/assets/blog/authors/romie/2024-02-08-compose-preview-beginner/01.gif =150x)
*Steps to reproduce the direction section*

So how do you check all eight directions efficiently? Worse, if the UI turns out broken or the wrong image is shown, you fix it and have to repeat the confirmation from the beginning, costing even more time. This is where Preview shines. Implement it as follows (a sketch showing how all eight directions could be previewed at once appears at the end of this post):

```kotlin
@Preview
@Composable
fun PreviewWalkRoute() {
    SampleAppTheme {
        Surface {
            WalkRoute(
                routeDetail = RouteDetail(),
                point = Point(
                    pointNo = "0",
                    distance = "200",
                    direction = Point.Direction.FORWARD,
                ),
            )
        }
    }
}
```

Build, open Android Studio's Split view, and there you have it.

*Preview of the direction section*

Just plug in the direction you want to check, and you can confirm that the right image and text appear. As for layout breakage, checking a single pattern on a real device is enough. This shortens behavior-checking time dramatically.

### Summary

I expect to experience much more of Compose than just Preview from here on. Even a light first touch produced this much delight, so I wanted to share it! I hope you will keep following this Compose beginner's journey of discoveries.
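As referenced above, here is a minimal sketch of how all eight directions could be previewed in one go with `PreviewParameterProvider`. This is not from the original post: it assumes the `WalkRoute`, `RouteDetail`, `Point`, and `SampleAppTheme` types shown earlier are in scope, and that `Point.Direction` is an enum with eight values.

```kotlin
import androidx.compose.material3.Surface // adjust Surface/theme imports to the project
import androidx.compose.runtime.Composable
import androidx.compose.ui.tooling.preview.Preview
import androidx.compose.ui.tooling.preview.PreviewParameter
import androidx.compose.ui.tooling.preview.PreviewParameterProvider

// Supplies every Direction value to the preview, one rendering per value.
class DirectionProvider : PreviewParameterProvider<Point.Direction> {
    override val values: Sequence<Point.Direction> =
        Point.Direction.values().asSequence()
}

// Android Studio renders one preview per provided value,
// so all eight direction variations appear together in the Split view.
@Preview
@Composable
fun PreviewWalkRouteAllDirections(
    @PreviewParameter(DirectionProvider::class) direction: Point.Direction,
) {
    SampleAppTheme {
        Surface {
            WalkRoute(
                routeDetail = RouteDetail(),
                point = Point(pointNo = "0", distance = "200", direction = direction),
            )
        }
    }
}
```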
---

## A Quality Improvement Initiative in the Quality Assurance Group

### Introduction

Nice to meet you! I am yama from the Quality Assurance Group. I joined KINTO Technologies in May 2023 and mainly do QA work for KINTO Unlimited.

### Today's Theme

Today, I will talk about the quality improvement initiative I have been working on since joining the company.

### The Quality Assurance Group and Its Future Prospects

The figure above shows the future prospects of the Quality Assurance Group as presented by our manager, zume, during a company announcement. QA was formed three years ago and the team is now stable — so what do we do next? I joined the company around that time. At the system development company where I previously worked, I did QA with the goal of contributing to quality by providing high-quality tests, but as I gained experience I ran into the limits of ensuring quality through testing alone. There is only so much you can achieve by finding and fixing bugs; you have to make sure bugs are not created in the first place. I wanted to work somewhere I could do that, so I joined KINTO Technologies.

### Understanding the Current Situation

After joining, I first worked to understand the in-house development and QA processes and the state of the project I was put in charge of (KINTO Unlimited). I found that waiting time in development was costly (my previous job also demanded speed, but not as much as KINTO Technologies does). The reason is that KINTO and KINTO Technologies build new services, and delays mean lost business opportunities. That made sense. So any quality improvement measure, I felt, had to be one that did not slow development down. I also knew from my previous job that a manager cannot keep handling everything personally for long, so I decided to keep things as simple as possible. What the project lacked, I realized, was an objective indicator of current quality. If you cannot see the quality of what you are building, you cannot see what needs to improve to stop bugs from being created, so we started by building a mechanism that makes quality visible.

### What We Did

Simply declaring "Let's do X to improve our quality!" would have been hard for people to accept out of the blue. So we first ran a somewhat in-depth quality analysis through a development project and fed the results back to the project. At the time, the bug reports did not carry enough quality-analysis fields, so the QA team traced the resolution history of each bug report, categorized them, and analyzed quality from there.

*Analysis materials (excerpt)*

At the end of the project, we presented the analysis results at a review session and made the case: "With these classification items in the bug report, we can see our quality and our current issues!" By narrowing down the classification items and simplifying the input method, we kept the burden on each person in charge to a minimum.

*Newly added classification items*

Thankfully, the proposal was accepted, and we began using bug reports with the quality classification items above.

### What I Want To Do in the Future

Now that we have a starting point for quality improvement, we want to feed the information we gather back to the project regularly and decide, together with development and QA, which quality issues to tackle. We also plan to keep these efforts up so that progress in quality improvement stays visible, and we aim to spread the approach horizontally to other projects. We will continue, believing this leads to the Quality Assurance Group's future prospects and the overall quality improvement I mentioned at the beginning.
---

## Verification and Validation in QA for Static Content

### Introduction

I am oshima from the Quality Assurance Group. I am an old man born in Kansai and a Hanshin Tigers fan; 38 years ago does not feel so long ago to me. I have no special skills or qualifications, but I have worked in QA for more than 20 years. I am currently in charge of QA for page designs and functionality improvements of released services such as KINTO ONE, KINTO ONE (Used Vehicles), KINTO FACTORY, and Mobility Market, as well as QA for static content such as introduction pages for new vehicle models.

Today, I will talk about the difference between verification and validation — relatively old-fashioned concepts in the QA world — explain the projects I am in charge of, and describe what I personally focus on and the issues (directions I want to pursue). I bring up verification and validation because I was once asked in a job interview to explain the difference between the two, and I had never thought about the terms or the difference before. That was almost ten years ago, but it left a deep impression on me, and I believe these concepts are still worth remembering in today's QA work.

### A Brief Explanation of the Difference Between Verification and Validation

So, what is the difference? If you look the two words up in Google Translate, they come out the same. They may seem identical, but strictly speaking they check different things.

- Verification: making sure the notations and behavior match the specifications — checking that they are implemented correctly.
- Validation: making sure the notations and behavior meet the product requirements — checking that they are correct.

For example, suppose you are testing an event announcement page (setting aside whether that is appropriate) and you find the statement "The application deadline is November 31." Checking the specifications, the documentation also states "The application deadline is November 31." From a verification perspective, the page is correct because it matches the specifications. From a validation perspective, however, November has only 30 days, so a deadline of the 31st cannot exist, and the page fails the test. A nonexistent date must never slip through, but it is hard for QA to judge whether something is a typo or an incorrect expression when it involves a legal term in a contract or a newly coined term for a new service, so you have to be careful.
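To make the distinction concrete in code terms, here is a tiny sketch of the "November 31" example above. It is purely illustrative and not from the original article; `isValidDate` is a hypothetical helper.

```typescript
const spec = "The application deadline is November 31";
const page = "The application deadline is November 31";

// Verification: does the page match the specification? → passes
const verified = page === spec; // true

// Validation: does the stated date actually exist? → fails
// (hypothetical helper; JavaScript Dates roll invalid days over,
// so November 31 silently becomes December 1)
function isValidDate(year: number, month: number, day: number): boolean {
  const d = new Date(year, month - 1, day);
  return d.getMonth() === month - 1 && d.getDate() === day;
}
const validated = isValidDate(2024, 11, 31); // false — November has 30 days
```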
### Characteristics of an Actual Case

As I said earlier, at KINTO Technologies I am mainly responsible for improvements to existing services and for final reviews of static content, such as event pages and new-model introductions. In this article I will focus on the static content, which comes frequently and in large volume. QA for static content differs somewhat from general program testing, but I work hard at it because KINTO customers see the results firsthand. The main job is checking whether the layout is broken and whether the descriptions and figures on the page differ from the price list supplied by the sales department or the vehicle images supplied by the design department. In the terms I used above, checking static content is mostly verification, with a few aspects of validation.

### Contributions and Challenges of Verification

Checking static content is not hard as long as you can consult the design and writing specifications (though it demands attentiveness). The catch is that if you cannot obtain those underlying specifications, you cannot even reach the verification stage. The work is necessarily passive, and you end up making piecemeal confirmations — "the XX area can be checked now; YY will be updated later" — so you have to stay flexible and track progress at every step. It may sound like a paradox, but I think verification improves the thing being tested precisely by flagging, before the test stage, anything that is ambiguous or undetermined in the specifications.

As for test automation: if you are simply checking specifications and the expected values derived from them, testing reduces to a comparison between the specification and the implemented result. Differences can then be inspected automatically, more accurately than by eye, which cuts workload. But you cannot compare anything if the specifications or expected values are undetermined, and if you cannot detect whether a specification change has been reflected in the implementation, automation is pointless. You have to define what is correct before testing. Like Akinobu Okada, a former Hanshin Tigers manager, I think this is an important part of a project's success.

### What You Can Do With the Contributions From Validation

For static content, QA cannot say much about design and the like; those calls belong to the subject-matter experts in each field. Still, an engineer with some knowledge of UX and design may be able to raise points from upstream. You can do this without overthinking it: read the explanations of the new service until they are familiar, then judge whether they are easy for the customer to understand and whether the figures correctly reflect the content of the service. I consider this validation, because you are not comparing against specifications; you are checking whether the service being provided is accurately communicated to the customer.

Here is a more everyday example of the difference. I once checked an information page for a vehicle model that had a link to a related article. The URL matched the specifications, but the vehicle model was different from the one in the article. That is fine in terms of verification, but wrong in terms of validation. Whether something is "really correct" can be debatable, but as far as avoiding mistakes goes, you have to keep the quality of the service you provide to customers high.

### Conclusion

I have talked big, but I am aware that realizing all of this is hard; it is closer to an ideal than to the everyday. Still, by doing the ordinary work over and over, you can gradually bring that ideal closer to reality. We are looking for skilled partners to do QA with us — especially superstars who can easily accomplish what we have not yet managed. People who are neither super nor a star but are willing to do their best alongside us are just as welcome. See the recruitment page below for details. Thank you for reading to the end.
---

## Half a Year of Operating KINTO FACTORY

### Introduction

Hello, I am Nishida, a backend engineer at KINTO FACTORY. Six months have passed since KINTO FACTORY launched, and I would like to talk about the problems we ran into while operating the service — mainly around releases and system monitoring — and what we learned.

### About KINTO FACTORY

A brief overview of the service: KINTO FACTORY lets you update your vehicle with hardware and software functions and items suited to it. You can add items that previously could only be chosen when ordering a new vehicle, such as manufacturer options, or apply vehicle updates that would normally require a trip to a dealer. The service covers not only the KINTO lineup but also compatible Toyota, Lexus, and GR vehicles. Almost half a year has passed since it launched this summer.

### Operations

We release every two weeks, with CI/CD built and deployment done mainly through GitHub Actions. There is no dedicated operations team, so the service is monitored chiefly by development team members taking turns; we use PagerDuty's scheduling function for the rotation. Incident detection is set up as shown in the illustration: application logs and service monitoring information flow into OpenSearch, which assesses user impact and notifies PagerDuty (the image shows the result of such a notification). From there, a notification goes to Slack and the person on monitoring duty responds, pulling in an expert when necessary. Response details are shared at the next day's daily scrum so the whole team stays informed.

### Challenges and Responses

Here are the problems we encountered after starting operations and how we dealt with them.

#### Preparing releases was a heavy burden

We had no templates, so each release had to be prepared from scratch, which took time. Task granularity was uneven, and because the release procedure lived in Confluence, checking and modifying it was cumbersome.

⇒ Moving the release procedure into code management gave us version control, templates, and diffs, and reduced the burden of release preparation (a hypothetical template is sketched below).
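The post does not show the actual template, but a release-procedure-as-code checklist kept in the repository might look something like this (the steps below are hypothetical placeholders, loosely based on the workflow described above):

```markdown
# Release YYYY-MM-DD

## Pre-release
- [ ] Confirm the release branch, tag, and diff against the previous release
- [ ] Review migration steps and the rollback plan

## Release
- [ ] Run the deploy workflow (GitHub Actions)
- [ ] Verify health checks and key user flows

## Post-release
- [ ] Watch OpenSearch/PagerDuty alerts for a set period
- [ ] Share the results at the next daily scrum
```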
#### Incidents were detected frequently

The number of incidents in the first few days after launch was as shown. Because error handling on the application side was designed leniently, errors were detected and treated as critical even when they had no user impact. Once operations began, there were incidents every day, and monitoring became a heavy burden.

⇒ There were too many incidents to handle all at once, so we tuned the alerting with exclusion settings: each output was reviewed, and anything non-urgent that could not be fixed immediately was excluded from notification.

#### Initial response to incidents was slow

Because response procedures were not organized, members interpreted them differently, and responses sometimes took a long time. Some members were also unused to PagerDuty and forgot to acknowledge status updates.

⇒ We set things up so members could define workflows in Slack, start incident response by entering a command, and handle incidents according to procedure. We also adopted an "incidents versus the team" structure in which all team members gather and handle an incident as a mob, comparing interpretations as they go. As a result, the time from alert to response dropped to less than one tenth of what it had been.

### Summary

We were busy dealing with a variety of problems right after launch, but things have kept improving little by little. Looking back, members initially responded differently depending on their experience and knowledge, and I feel we got off to a smooth start because we had planned our operations in advance. There is still room for improvement, but we will keep refining operations so we can provide a better service for our users!

### Conclusion

In this article, I talked about the problems we encountered while operating the service and how we dealt with them. We hope our experience is of some help to you. KINTO FACTORY is also looking for new members, so if you are interested, please check out our job openings!
---

## Reading Vehicle Inspection Certificate QR Codes in KINTO FACTORY

### Introduction

Hello, and thank you for reading! I am Nakamoto, and I work on frontend development for the KINTO FACTORY service (hereinafter FACTORY). Since I decided to take part in the Advent Calendar series, I will talk about how we implemented the vehicle inspection certificate QR code reading feature we released this year.

### Motivations for Adoption

At FACTORY, we are building a service that lets customers enjoy their cars for a long time from the perspectives of "Upgrade," "Renovation," and "Personalization." FACTORY's frontend works like an e-commerce site: customers search for products that can be installed on their vehicle and apply online. Information about the vehicle they currently drive — model, model year, grade, and so on — is important, because the service uses it to determine which products can be installed and which combinations are possible. To make those determinations accurately, FACTORY relies on the chassis number. As some of you may know, a vehicle inspection certificate carries a serial number that is the chassis number (the third field from the top left in the figure below).

*Source: Ministry of Land, Infrastructure, Transport and Tourism, "Vehicle Inspection Certificate (Sample)" ( https://wwwtb.mlit.go.jp/hokkaido/content/000176963.pdf )*

With FACTORY, entering your chassis number lets you easily search for products that fit your vehicle.

*Product listing page (products searched by chassis number)*

Right after FACTORY's release, users had to type the chassis number manually into the form shown below.

*Chassis number entry form*

We released chassis-number search this June, but from the start I suspected that typing unfamiliar alphanumeric characters directly into an input field would be difficult. As with many e-commerce sites these days, most people access FACTORY on their smartphones, and typing a long alphanumeric string on a small screen is tedious and error-prone. When a typo happens, the site reports that the vehicle is not supported by FACTORY, and the user cannot find the products they are looking for. I believed this could be contributing to missed opportunities. Then I noticed the QR code at the bottom right of the vehicle inspection certificate.

### Information That Can Be Read From a Vehicle Inspection QR Code

As the Ministry of Land, Infrastructure, Transport and Tourism's electronic vehicle inspection website indicates, the chassis number used in the search above can be read from the certificate's QR codes. Furthermore, if you find a product on FACTORY you want to install, you must first register your vehicle information, which requires the chassis number and license plate information. The QR codes essentially contain the information printed on the certificate, so we figured reading them would make both product search and vehicle registration easy. As you can see in the figure below, the QR codes come in groups of two or three, and every code in a group has to be read correctly. Each QR code holds a separate piece of information, and once all codes are read they must be combined in the correct order. We also wanted on-screen feedback so the user can see which code in each group has already been read.

*Source: Ministry of Land, Infrastructure, Transport and Tourism, "About QR Codes" ( https://www.denshishakensho-portal.mlit.go.jp/assets/files/Two-dimensional_code_item_definition.pdf )*

### Implementing the Reading UI

The codes are captured with the smartphone camera, run through image processing, and analyzed and decoded with a JavaScript library. This was our first time reading and decoding QR codes with a camera, so we experimented while gathering information online. We chose the getUserMedia() web API for camera capture and the JavaScript library qr-scanner for QR analysis. Each frame captured via getUserMedia is copied to a canvas element and handed to qr-scanner, which decodes it and determines which of the expected codes was read (a rough sketch of this loop follows below).

When we tested the implementation, reading was somewhat unreliable, and the reading rate was noticeably lower for the new vehicle inspection certificates. Japan switched to electronic vehicle inspection certificates in January; as anyone who has bought a vehicle or had an inspection since then knows, the new form (B5) is considerably smaller than the old one (A4), so its QR codes are denser and harder to read. By reading the qr-scanner source and adjusting the grayscale settings, we got both the new and old certificate types to read with reasonable accuracy: after the image data is copied to the canvas, it is converted to a black-and-white image using those grayscale settings before being passed to the QR analysis library. Depending on the threshold values, the certificate's background pattern showed up as noise, which seemed to lower the reading rate.

https://github.com/nimiq/qr-scanner/blob/master/src/worker.ts#L73-L79
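Here is a minimal sketch of the capture-and-decode loop described above. It is illustrative, not FACTORY's production code: the grouping key and `EXPECTED_CODES` are hypothetical, and the qr-scanner options follow that library's documented API as I understand it.

```typescript
import QrScanner from 'qr-scanner';

const EXPECTED_CODES = 3; // the certificate's QR codes come in groups of two or three

async function startReading(video: HTMLVideoElement, canvas: HTMLCanvasElement) {
  // Open the rear camera with getUserMedia.
  video.srcObject = await navigator.mediaDevices.getUserMedia({
    video: { facingMode: 'environment' },
  });
  await video.play();

  const ctx = canvas.getContext('2d')!;
  const codes = new Map<string, string>(); // which codes have been read so far

  const tick = async () => {
    // Copy the current frame to the canvas (grayscale/threshold tweaks go here)...
    ctx.drawImage(video, 0, 0, canvas.width, canvas.height);
    try {
      // ...and hand it to qr-scanner for decoding.
      const result = await QrScanner.scanImage(canvas, {
        returnDetailedScanResult: true,
      });
      // Hypothetical: identify the code by a payload prefix so the UI
      // can check off the matching box at the top of the screen.
      codes.set(result.data.slice(0, 2), result.data);
    } catch {
      // No QR code found in this frame; keep scanning.
    }
    if (codes.size < EXPECTED_CODES) requestAnimationFrame(tick);
  };
  requestAnimationFrame(tick);
}
```

Once all expected codes are present, their payloads would be combined in the defined order and handled the same way as a manual chassis-number search.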
When implementing the UI, we wanted users to see, group by group, which QR codes had been read while watching the live camera image, so we arranged the necessary number of boxes at the top of the screen and displayed a check icon on the border of each QR code that had been read. The reading screen looked like the figure below. The positions of the codes are recognized correctly regardless of the order in which they are read. Once all the information has been read, the screen transitions exactly as it does after a manual chassis-number search.

### After Release

We released the QR code search function in October. For a while after release it was available only on the product list page, as in the example above, so it is hard to say it saw heavy use; but it is now also available on the home screen. Reading the QR code from the home screen checks vehicle support and product compatibility in one step, and we expect usage to grow. As mentioned in the section on readable information, the QR codes also contain the information needed for vehicle registration, so we plan to use it there too, improving convenience and reducing input errors by minimizing manual input.

### Conclusion

The spread of smartphones, richer web content, and the expansion of various web APIs keep widening what websites can do. We will keep trying new technologies with user convenience in mind — real-time OCR, NFC tags, and other contactless technologies — and explore ideas that make KINTO FACTORY easier and more fun to use!
Lastly, KINTO FACTORY is looking for people to work with us. If you are interested, please check out our job openings!
---

## Making a Shopping Cart Mock-Up With Figma Variables (Part 1)

### Introduction

Hello! I am Aoshima from KINTO Technologies' Project Promotion Group. Variable features have recently been added to Figma. They are very convenient, but there do not seem to be many tutorials about them written in Japanese, so — presumptuous as it may be — I am writing this article in the hope that it will be useful even for beginners.

### Let's Make a Shopping Cart Mock-Up Using Figma Variables!

![](/assets/blog/authors/aoshima/figma/image.webp =400x)
*The completed shopping cart*

I will explain over two parts how to create an interactive shopping cart using Figma variables. Part 1 introduces variables themselves, then builds a count-up function for increasing and decreasing the number of products and a subtotal that recalculates from the quantity. Part 2 handles two products in the cart: calculating the subtotals, configuring free shipping, configuring the total amount, and switching the wording when shipping is free.

Contents:

- [Part 1] What are variables / Part creation / Creating the count-up function / How to create and assign variables / Subtotal settings
- [Part 2] Increasing the number of products to two / Subtotal settings / Free shipping settings / Total settings / Changing the wording for free shipping

### [Part 1] What Are Variables

Variables ("hensuu" in Japanese) are a feature added to Figma in June 2023; they can be assigned to objects. Variables are often described as boxes that temporarily hold information and values. Figma provides four variable types — number, color, text, and boolean — and values of these four types can be assigned to variables. Types cannot be mixed, however: a number variable cannot be turned into a text variable partway through.

### Part Creation

Now let's start building. We want these two behaviors:

- Count-up: the number of products increases when the plus button is pressed and decreases when the minus button is pressed
- Subtotal: the total amount changes according to the number of products

First we create the parts, then create and assign the variables these behaviors need.

#### Create the base parts

For the count-up function, I created a coffee-shop-style design as the base of the shopping cart, with coffee beans as the item in the cart.

![](/assets/blog/authors/aoshima/figma/base.png =400x)
*The base parts: one product in the cart*

#### Create the button parts

Next, since we want the button color to change on mouseover, we turn the button into a component and create a variant. In the sample, the default state is light gray and the mouseover state is dark gray.

![](/assets/blog/authors/aoshima/figma/button.png =400x)
*Create the button variant*

### How To Create and Assign Variables

With the parts ready, here is how to create and assign variables. A variable is assigned to the object you want to change (by clicking or some other action). For example, we want the number of products to change when the plus or minus button is pressed, so we assign a variable to the quantity.

![](/assets/blog/authors/aoshima/figma/variable.png =400x)
*Assign a variable to the quantity in the red rectangle*

To create a variable, click "Local variables" in the right panel to open the window, then click the "Create variable" button at the bottom left of the window.

![](/assets/blog/authors/aoshima/figma/local_variable.png =400x)
![](/assets/blog/authors/aoshima/figma/all_variable.png =400x)

Since it will be assigned to a number, choose "Number" as the type and name it "Kosu1." Assuming the cart already holds one product, enter 1 in the Mode 1 field. The variable is now created. Next, assign it to an object: select the number 1 in the design, click the octagonal icon (above the three-dot leader) in the text section of the right panel, and pick Kosu1 from the list of variables. The variable is now assigned.

![](/assets/blog/authors/aoshima/figma/kosu1.png =400x)
*A list of assignable variables appears*

We assign the variable Kosu1, with value 1, to the "1" between the plus and minus buttons. If you change the value to 2, the number on the object changes to 2 as well.

### Creating the Count-Up Function

With parts and variables in place, let's make the number of products increase when the plus button is pressed. To do this, assign an action to the plus button: select it, click "Prototype" at the top of the right panel to switch modes, then click the + to the right of "Interactions" to add a mouse action.

![](/assets/blog/authors/aoshima/figma/count_up1.png =400x)
*Select a variable and enter a formula*

For the trigger, we want the number to increase on click, so the default "On click" is fine. Then choose "Set variable" and do the following:

- Select the variable to change when the object is clicked
- Enter the formula for what should happen on the click

When you are done, it should look like the figure.

![](/assets/blog/authors/aoshima/figma/kosu.png =400x)
*Select the variable and fill in the formula*

This formula means that when the plus button is clicked, Kosu1 becomes Kosu1 + 1; if the plus button is clicked while Kosu1 is 1, it becomes 2. In the preview, the number increases by one with each click of the plus button.

We assign an action to the minus button the same way, with one caveat: unlike the plus button, without a restriction (condition) the number can go negative. So we set a conditional action (an if statement), as shown below.

![](/assets/blog/authors/aoshima/figma/count_up3.png =400x)

This formula means that if Kosu1 is not 0, Kosu1 becomes Kosu1 - 1. Clicking the minus button while Kosu1 is 1 gives 0, but clicking it while Kosu1 is 0 does nothing, so the value stays at 0. In the preview, the number rises with the plus button and falls with the minus button, but never drops below zero.

![](/assets/blog/authors/aoshima/figma/count_up.gif =400x)
*How the count-up function works*

The count-up function is now complete.
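To recap the logic so far in one place (written as pseudocode for readability; in Figma these are entered through the "Set variable" action and a conditional, not as a script):

```text
on click (plus button):
    Kosu1 = Kosu1 + 1

on click (minus button):
    if (Kosu1 != 0):
        Kosu1 = Kosu1 - 1
```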
### Subtotal Settings

Finally, let's set up the subtotal. In Part 1 there is only one product, so we treat the total amount as the subtotal.

#### Create and assign a variable

The subtotal is also an object that changes with the number of items, so it too needs a variable. I created it the same way as before and named it "Shoukei." Since the cart starts with one ¥100 product, the subtotal starts at ¥100 and the value is 100. Assign the variable to the amount the same way as before.

![](/assets/blog/authors/aoshima/figma/sum1.png =400x)
*Assign a variable to the amount in the red rectangle*

#### Give the buttons an action

The subtotal should also rise and fall as the plus and minus buttons are pressed, just like the quantity. Here you can use the multiplication operator to compute quantity × price, and conveniently the same formula works for both the plus and minus buttons. When you are done, it should look like the figure.

![](/assets/blog/authors/aoshima/figma/sum2.png =400x)

This formula sets Shoukei to Kosu1 × 100, so if Kosu1 is 1, the subtotal is 1 × 100 = ¥100. In the preview, the number of products rises and falls as you click the plus and minus buttons, and the subtotal changes accordingly.

![](/assets/blog/authors/aoshima/figma/sum.gif =400x)
*The subtotal changes*

The subtotal settings are now complete. With just the material in Part 1 you can already mock up a like button or a simple cart, which should prove useful in many situations. This concludes Part 1. In Part 2 we will put all of this to practical use: I will explain how to make shipping free when there are two products and the amount exceeds a certain threshold. I hope this was helpful.
---

## Improving the CloudFront Functions Deployment Process

### Introduction

Nice to meet you. I am Shirai, a cloud infrastructure engineer on the Cloud Infrastructure Team of the Platform Group at KINTO Technologies Corporation. I typically design and build infrastructure for systems running on AWS. My hobbies are table tennis and video games; recently I bought the Super Mario RPG remake and played it while basking in nostalgia. This time, I will introduce the deployment process for CloudFront Functions as built at KINTO Technologies, and the story of the operational improvements, background included!

### KINTO Technologies' Cloud Infrastructure Team

Before exploring the subject, a bit about our team. At KINTO Technologies, infrastructure construction is managed as IaC using Terraform. For the historical background, see the article by Mr. Shimakawa of the same team, "How to Abstract Terraform and Reduce the Man-hours Required to Build an Environment."

### Current Challenges

KINTO Technologies currently uses CloudFront Functions (hereafter CF2) for redirect processing in some systems. For more details, see team member Mr. Iki's post introducing CF2, "Edge Functions Available with CloudFront." While using CF2, the following three challenges were raised:

1. High communication costs between the Application Team and the Cloud Infrastructure Team
2. The Application Team is not authorized to view the logs output to CloudWatch Logs
3. The logs output to CloudWatch Logs never expire

These are the three challenges we set out to solve.

### Digging Deeper Into the Challenges

#### 1. High communication costs

The process for applying CF2 changes used to be as follows.

*Deployment process to date*

Because deployment relied on the Cloud Infrastructure Team, any issue in the CF2 source code meant the Cloud Infrastructure Team had to re-execute steps (2) to (4) in the diagram. The problems with this flow:

- Updating CF2 depends on the Cloud Infrastructure Team
- Whenever CF2 is updated, the Cloud Infrastructure Team must also review the scope of impact and coordinate with the Application Team

These two points resulted in high communication costs.

#### 2. The Application Team cannot view logs

KTC restricts the permissions handed to the Application Team, so the team has no permission to view CF2 logs. As things stood, the Application Team could not investigate when a problem occurred in CF2.

#### 3. CF2 logs never expire

CF2 had been built without setting up a CloudWatch log group. By CF2's specification, when CF2 logs are output, a log group named /aws/cloudfront/function/${FunctionName} is automatically created in CloudWatch Logs in the us-east-1 region. Created that way, the log group has no retention period, so logs persist indefinitely and drive up costs.

### Solutions

The problems and solutions are summarized below.

| # | Issue | Solution |
| --- | --- | --- |
| 1 | High communication costs | Grant the Application Team the permissions to deploy at any time |
| 2 | The Application Team cannot view logs | Grant the Application Team log-view permissions |
| 3 | CF2 logs never expire | Create the log group with a retention period up front |

Now, let me dig into each solution.

#### Issue 1: High communication costs

As mentioned above, we establish permissions and a process that allow the Application Team to deploy at any time.
So I decided to revamp the deployment process. First, here are examples of the configuration before CF2 was built and of the final configuration after the revamp.

*Example of the configuration before CF2 was built*
*Example of the final configuration*

A little more detail on CF2's DEVELOPMENT and LIVE stages: the LIVE stage is the CF2 actually running attached to CloudFront, while the DEVELOPMENT stage is used mainly for development and lets you validate the requests that will reach the LIVE stage.

Next, a brief explanation of the maintenance role and CICD user shown in red text.

**Duties of the maintenance role**: monitoring and updating AWS services in the AWS Management Console. At KINTO Technologies, console logins go through SSO into an account provided per environment. After SSO login, switching to the appropriately authorized maintenance role lets you view and update the necessary AWS services manually. Because several products share the same account, view and update permissions are restricted to prevent misoperation.

**Duties of the CICD user**: updating AWS services from CI/CD tools such as GitHub Actions; its permissions are the ones used to deploy applications. The permissions granted are determined by the AWS resources each product uses: a product that deploys Lambda and ECS can deploy both, while a product that deploys only ECS can deploy only ECS.

The existing maintenance roles and CICD users had no CF2 permissions, so the following were added:

```json
{
    "Action": [
        "cloudfront:UpdateFunction",
        "cloudfront:TestFunction",
        "cloudfront:PublishFunction",
        "cloudfront:ListFunctionTestEvent",
        "cloudfront:GetFunction",
        "cloudfront:DescribeFunction",
        "cloudfront:DeleteFunctionTestEvent",
        "cloudfront:CreateFunctionTestEvent"
    ],
    "Effect": "Allow",
    "Resource": "arn:aws:cloudfront::{AccountID}:function/${aws:PrincipalTag/environment}-${aws:PrincipalTag/sid}-*",
    "Sid": ""
},
{
    "Action": [
        "cloudfront:ListFunctions"
    ],
    "Effect": "Allow",
    "Resource": "*",
    "Sid": ""
}
```

As a side note, at the DEVELOPMENT stage CF2 can run test requests, much like Lambda. The *TestEvent permissions above were needed for this, but those actions were not listed in the official documentation, so I worked out the necessary permissions from CloudTrail. A good reminder that the official documentation isn't everything.

Next, the division of responsibilities between the Cloud Infrastructure Team and the Application Team:

| Task | Cloud Infrastructure Team | Application Team |
| --- | --- | --- |
| CF2 permissions | ○ | - |
| Create sample app and link it to CloudFront | ○ | - |
| Develop CloudFront Functions and publish to the LIVE stage | - | ○ |
| Operate and monitor CF2 | - | ○ |

Now let's walk through the actual process of deploying (publishing) to the LIVE stage.

**1. The Application Team asks the Cloud Infrastructure Team to build.** A Jira ticket is issued based on the following template:

- CF2 name: hogehoge (e.g., redirect-cf2)
- Environments to build: xxx
- CloudFront ARN to associate: arn:aws:cloudfront::{AccountID}:distribution/{DistributionID} (e.g., arn:aws:cloudfront::111111111111:distribution/EXXXXXXXXXXXXX)
- Cache behaviors to associate:

| | Viewer request | Viewer response |
| --- | --- | --- |
| hogehoge | ○ | - |
**2. The Cloud Infrastructure Team builds.** The Cloud Infrastructure Team links a pass-through sample CF2 it created to the CloudFront behavior, and grants development and deployment permissions to the Application Team's maintenance role and CICD user.

```javascript
function handler(event) {
    var request = event.request;
    return request;
}
```

*The Cloud Infrastructure Team updates and creates the necessary resources (the red frame in the diagram).*

**3. The Application Team publishes the CF2 code to the DEVELOPMENT stage.** The source code can be updated in two ways: manually from the AWS Management Console using the maintenance role, or via CI/CD tools such as GitHub Actions using the CICD user's credentials. Tests can be run from the console or from CI/CD as well.

*Development and testing*

**4. The Application Team publishes the CF2 code to the LIVE stage.** As with the DEVELOPMENT stage, publishing to LIVE can be done from the AWS Management Console or from CI/CD tools such as GitHub Actions (a hypothetical CLI sketch of these publish steps appears at the end of this post).

*Final configuration*

#### Issue 2: The Application Team cannot view logs

Grant view permissions on the log group:

```json
{
    "Action": [
        "logs:StartQuery",
        "logs:GetLogGroupFields",
        "logs:GetLogEvents"
    ],
    "Effect": "Allow",
    "Resource": "arn:aws:logs:us-east-1:{AccountID}:log-group:/aws/cloudfront/function/${aws:PrincipalTag/environment}-${aws:PrincipalTag/sid}-*-cloudfront-function:log-stream:*",
    "Sid": ""
}
```

With the log-group view and Logs Insights permissions above granted to the maintenance role, the logs are now visible. As a result, I believe the Application Team can now take the lead in addressing problems as they occur.

#### Issue 3: CF2 logs never expire

We now create the CloudWatch log group at the time CF2 is built, by including it in the module referenced when creating CF2:

```hcl
resource "aws_cloudwatch_log_group" "this" {
  name              = "/aws/cloudfront/function/${local.function_name}"
  retention_in_days = var.cwlogs_retention_in_days == null ? var.env.log_retention_in_days : var.cwlogs_retention_in_days
}
```

### Summary

We implemented three improvements for CF2 this time. In bullet points:

- Issue 1: High communication costs → organize permissions and processes so the Application Team can deploy on their own → the Application Team can now act at any time, communicating only when actually needed
- Issue 2: The Application Team cannot view logs → grant the team log-view permissions → when problems occur, they can check the logs and respond by themselves
- Issue 3: CF2 logs never expire → create the destination log group with a retention period up front → log retention is now bounded, contributing to cost optimization

Thank you for reading my article all the way to the end!
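As referenced in step 4 above, publishing CF2 from CI/CD boils down to a handful of AWS CLI calls. The following is a hypothetical sketch (function name, file names, and runtime version are placeholders) of what a GitHub Actions job using the CICD user's credentials might run:

```bash
FUNC_NAME="dev-example-redirect-cf2"   # placeholder function name

# Update the DEVELOPMENT stage with the new source code.
ETAG=$(aws cloudfront describe-function --name "$FUNC_NAME" \
         --query 'ETag' --output text)
aws cloudfront update-function --name "$FUNC_NAME" --if-match "$ETAG" \
  --function-config Comment="redirect",Runtime="cloudfront-js-1.0" \
  --function-code fileb://function.js

# Run a test event against the DEVELOPMENT stage.
ETAG=$(aws cloudfront describe-function --name "$FUNC_NAME" \
         --query 'ETag' --output text)
aws cloudfront test-function --name "$FUNC_NAME" --if-match "$ETAG" \
  --stage DEVELOPMENT --event-object fileb://test-event.json

# Promote the tested code to the LIVE stage.
aws cloudfront publish-function --name "$FUNC_NAME" --if-match "$ETAG"
```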
---

## Useful Jira Features for Scrum Teams

### Outline in 3 Lines

- I am a Scrum Master
- We use Atlassian products, including Jira, in our company
- Here are some useful features of Jira

### Introduction

Hello, everyone. This is Koyama from KINTO Technologies. I am an iOS (Swift) engineer, and lately I have also been doing a bit of Scrum Master work. This time, I would like to introduce some recommended Jira practices.

### Scrum and Jira?

As stated in the 2020 Scrum Guide, a key part of the Scrum Master's role is coaching the team members in self-management and cross-functionality. I expect various mechanisms to be created to fulfill this, and what matters is a system that operates with minimal human (Scrum Master) intervention. ...No, I don't mean I just want to make things easier. ...Okay, maybe a bit (we all want to work with less effort, though).

### So, What Was Jira Again?

Jira is a SaaS product developed by Atlassian, used primarily for project management; for more information, see the official website. It has a very versatile task-management function and integrates tightly with Confluence, Atlassian's document-management tool. Our company provides both Jira and Confluence to all employees. I wonder how widespread that is.

### Getting to the Point (Finally)

#### Creating fields and making them required

"The person in charge didn't describe the issue in detail!" "I meant to add the deadline later, but forgot..." We hear things like this. In project management with Jira, the information recorded on each issue is what lets you and the people around you understand it quickly, sparing everyone from explaining the same things over and over. Beyond the fields that come with the template, Jira lets you add and customize any field and even make it mandatory. This can structurally resolve the complaints above:

- "The person in charge didn't describe the issue in detail!" → add fields that capture the detailed information.
- "I meant to add the deadline later, but forgot..." → make the deadline field mandatory.

Use case: our team makes the label field mandatory to clarify which team an issue belongs to — "iOS," "Android," "Backend," and so on. Simply adding suitable labels would cover the informational need, but making the field required prevents the person in charge from unintentionally omitting it while creating a ticket. To set it up, just tick the "Required" checkbox. It's that simple. However, if you make every field mandatory, creating an issue becomes a chore, so agree on usage guidelines with your team first.

#### Automation

"It's tedious to enter the same values every time I create an issue." "I want to check regularly that issues are being handled correctly." Next, let's address this kind of feedback. Jira has a feature called Automation, which I avoided when I first started using Jira because I did not know what it was. Once I tried it, though, it turned out to be really convenient: it can solve most problems that can be solved within Jira. Let's look at the settings in order.

"It's tedious to enter the same values every time I create an issue." → Create a rule with the trigger set to "Issue created." In our project, we automatically flag newly created issues after determining whether they have related issues. For reference, the settings are:

- Set "Issue created" as the trigger for the rule
- Check whether the linked issues include a "story" or "bug"

Jira can narrow down not only whether linked issues exist but also what type they are. We use this to keep issue handling in line with the team's approach.

"I want to check regularly that issues are being handled correctly." → How about a trigger like "every day at midnight" or "every Monday at 13:00"? Our project runs scheduled checks so that time-limited issues are never overlooked. For example:

- Set "9:00 a.m. on weekday mornings" as the trigger for the rule
- Check whether the due date has passed
- Send a message via the Slack webhook

Since Jira can send Slack notifications, we can leave important announcements to our now-familiar Slack. Situations like "I didn't notice the Jira comment!" can be avoided as well.

#### JQL?

The term "JQL" appears frequently in Automation settings and may be unfamiliar. If you use Jira purely as a task-management tool you may never need it, but from a management perspective it allows detailed configuration. JQL stands for "Jira Query Language," Jira's own language for searching for specific issues. I had trouble finding the information I needed on the web, so here is how I use JQL for automation myself.

Narrow down projects and issue statuses: by default, scheduled executions put every Jira project in scope. If you set up an automation without scoping it, you may unintentionally manipulate other teams' issues — it can happen. Our project requires the following conditions for scheduled automation:

- The project must be ours
- The status must be neither "Done" nor "DROP"

`project = PP20 AND status not in (Done, "DROP")`

Note: in the PP20 part, enter the key set for your project.
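The post does not show the full query behind the scheduled overdue-ticket check; combining the scoping above with a due-date condition, a plausible JQL sketch (the exact conditions the team uses are an assumption) would be:

```
project = PP20 AND status not in (Done, "DROP") AND duedate < now()
```

Run at 9:00 a.m. on weekdays, a rule with this query would pick up every open issue whose due date has already passed and hand the results to the Slack notification step described next.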
Set "Issue Created" as the trigger for the rule. Determine if there are "story" or "bug" in linked issues Jira allows you to narrow down not only whether there are linked issues, but also the type of issues to which they are linked. We use these to support issue operation in line with the team's approach. "I want to regularly check that the issue is operating correctly." For this, how about setting the trigger to "every day at midnight" or "every Monday at 13:00"? In our project, a system has been created for regular checks to ensure no oversight of time-limited issues. Here are some examples. Set "9:00 a.m. on weekday morning" as a trigger for rule Check that the limit is expired. Send a message using Slack Webhook Since Jira can also send Slack notifications, we can leave it up to our now familiar Slack to notify us about important messages. "I didn't notice the Jira comment!" You can also avoid situations where you might miss comments. JQL? The term "JQL", which frequently appears in automation settings, is unfamiliar, right? If you are only using Jira as a task management tool, you may never use JQL, but from a management perspective, it allows detailed configuration. "JQL" stands for "Jira Query Language" and is a unique language for searching for specific issues in Jira. I myself had difficulty finding the information I needed on the web, so here I would like to introduce JQL and how I use it myself for automation. Narrow down projects and issue statuses All Jira projects are in scope by default in regular executions. If you set up an automation workflow without any scope, you may unintentionally manipulate the issues of other teams. It can happen. Our project has the following conditions for automation to be performed on a schedule. The project must be yours The status must be "Done" or "DROP" project = PP20 AND status not in (Done, "DROP") Note: In the PP20 part, enter the key set for the project. Insert the content of the issue into the message you send to Slack As mentioned in Automation, it is possible to connect to Slack, but without information on the issue, the person in charge who is contacted on Slack will be confused. In our project, we create Slack messages with the following settings. *There are overdue tickets. Please review the contents asap. * @channel > 【<https://hogehoge.atlassian.net/browse/{{issue.key}}|{{issue.key}}>】{{issue.summary}} The message changes to something like this. Actual Slack message Conclusion This was the introduction of Jira's useful settings. I hope this provides a bit of help for both current Jira users and those considering Jira as their management tool. In order to make Agile development easier and more progressive, I look forward to leveraging the useful features of Jira.
Hello Hello, I'm Maya from the Tech Blog team at KINTO Technologies! I interviewed those who joined us in October 2023 about their immediate impressions of the company and summarized them in this article. I hope this content will be useful for those who are interested in KINTO Technologies, and serve as a reflection for the members who participated in the interview. IU Self-introduction I am IU from KINTO ONE New Vehicle Subscription Development Group. I am in charge of front-end development such as membership screens and contract simulation screens for KINTO ONE. How is your team structured? We are basically a front-end team, with six people. I am participating with Mr. Shirahama, who joined the company at the same time. What was your first impression of KINTO Technologies when you joined? Were there any surprises? Before joining the company, I anticipated it to be somewhat old-fashioned and rigid, given it is a subsidiary of a major corporation. However, I've found the working atmosphere to be more relaxed than expected. In particular, I feel a strong sense of speed, with services rapidly improving in short cycles from idea planning to implementation and release in a short period of time. What is the atmosphere like on site? Even though each person concentrates on their individual tasks and often works independently, we come together at meetings to share updates on the tasks each person is handling. When faced with challenges, we engage in group discussions to find solutions, and we conduct periodic reading sessions to enhance our knowledge. How did you feel about writing a blog post? I knew about the Tech Blog before I joined the company, but I thought that only a limited number of people could write for it. However, the environment was inclusive, and everyone was not only welcome but also encouraged to write. I enjoy writing articles, so I would like to actively participate in Tech Blog. Jongseok Bae Self-introduction My name is Jongseok Bae from South Korea who joined the company in October. I am an Android developer in the Prism Team at the Mobile App Development Group in the Platform Development Division. How is your team structured? The Prism team manages schedules and meetings using the Agile framework. What was your first impression of KINTO Technologies when you joined? Were there any gaps? At the beginning, I felt that the company was well explained through on-the-job training and other activities. I thought it was different from my previous experiences where companies typically provided only a brief explanation for about a day. I also felt that many in-house study sessions are held and it is really nice to catch up on information that might otherwise be easily missed. What is the atmosphere like on site? The team members are kind. I found it to be a good work environment where communication with others allowed me to ask questions, learn, and share opinions in the course of work. How did you feel about writing a blog post? It was burdensome at first, but as I reflected on what I felt over the month, I realized it's a great way to organize my thoughts. Martin Hewitt Self-introduction I'm Martin from France. I'm involved in Platform Engineering at KINTO Technologies. How is your team structured? The Platform Group consists of six teams, each specializing in a particular area of expertise, including SRE, DBRE, Platform Engineering, and Cloud. What was your first impression of KINTO Technologies when you joined? Were there any gaps? They faithfully introduced us to the company! 
This never happens in France. I felt that it was a modern company, which is different from the image I had of Japanese companies.

What is the atmosphere like on site? Everyone is so kind! I was nervous at first, but I quickly got used to it.

How did you feel about writing a blog post? How fast!

U.A

Self-introduction: I am U.A from the KINTO ONE Development Group. As a Producer and PdM (Product Manager), I am in charge of supporting the digital transformation of Toyota dealers.

How is your team structured? I belong to the Digital Transformation Planning team within the Owned Media & Incubation Development Group. Our team visits Toyota dealers directly to ask about their problems and solve them with the power of IT. The team consists of six members in total: two producers, three directors, and one designer.

What was your first impression of KINTO Technologies when you joined? Were there any surprises? I was surprised by everything: the freedom to dress casually, to color my hair as I like, and to work flexible hours. I feel like I am in an enriching environment, drawing on my past experiences while collaborating with specialists from diverse fields in each department. So, the more people I get to know, the more I can stretch my abilities.

What is the atmosphere like on site? The Digital Transformation Planning team is full of personalities and never runs out of topics to discuss. I feel how important communication is every day, as everyday conversations often give me hints for digital transformation.

How did you feel about writing a blog post? I am surprised that it has already been a month since I joined the company! I will continue to cherish each day.

Ahsan Rasel

Self-introduction: My name is Rasel, from the Mobile App Development Group, Platform Development Division. I am from Bangladesh. I work on the Android version of the my route by KINTO app.

How is your team structured? We are a four-member multinational team, including myself, with members from Japan, Bangladesh, and South Korea. Our team uses the Agile methodology for our workflows.

What was your first impression of KINTO Technologies when you joined? Were there any surprises? At orientation, I was able to learn in detail about the company's structure, mission, and the vision of every division. I had an orientation at my previous job, but it wasn't as detailed. I also found it much easier to get my opinions across to the CIO/CEO.

What is the atmosphere like on site? Everyone is very kind and easy to work with. I had a lot of questions after I joined, but I'm grateful that everyone was happy to explain things in detail. When I have difficulties in Japanese, I can switch to English for smoother communication, which I find to be a good thing.

How did you feel about writing a blog post? It feels fast. I have written technical blog posts before, but this is my first time writing a non-technical one, which was a surprising task considering how recently I joined the company.

Yuhei Miyazawa

Self-introduction: I am Miyazawa from the Operation System Development Group, Platform Development Division. In my previous job, I developed e-commerce sites on the vendor side (systems integrator). Currently, I am developing a system to handle back-office operations related to KINTO ONE used vehicles.

How is your team structured? Development is driven by an in-house team of 5 people and approximately 20 vendor engineers. In order to promote in-house development, many people with high technical expertise are employed, and you will never find the mentality of "professionals only manage projects and rely on vendors for technology" here.

What was your first impression of KINTO Technologies when you joined? Were there any surprises? Freedom and discretion in the way you work, and a good communication culture with respect for others!

What is the atmosphere like on site? It is not noisy, but not too quiet; a peaceful atmosphere. The team relationships are so good that if someone suggests, "Let's have a drink at that restaurant," it actually happens.

How did you feel about writing a blog post? I was relieved that the content was about self-introduction and company introduction.

Ryomm

Self-introduction: My name is Matsusaka / Ryomm (@ioco95) from the Mobile App Development Group, Platform Development Division. I am on the team that takes care of the iOS version of the my route by KINTO app.

How is your team structured? The iOS development team of my route consists of six people, including myself.

What was your first impression of KINTO Technologies when you joined? Were there any surprises? When I joined the company, my first impression was that it was rather conservative, but when I proposed ideas for what I wanted to do, I found support from the people around me, and I now find myself able to spend my time flexibly.

What is the atmosphere like on site? There is plenty of work time, allowing me to work silently and concentrate on what I have to do. It feels refreshing to be able to come and leave whenever I want thanks to the full flextime. Some of us start working at 5:00 a.m. on days when we work from home, and I truly feel a sense of freedom.

How did you feel about writing a blog post? It's refreshing that the blog creates articles on GitHub.

Pauline Ohlson

Self-introduction: Hello! My name is Pauline Ohlson. Starting in October, I was assigned as an Android engineer to the Mobile Development Group in the Platform Development Division.

How is your team structured? I work in the Osaka office, where I am seated together with the iOS engineers working in Osaka. Many of my Android project colleagues work in Tokyo, so I collaborate with them from Osaka.

What was your first impression of KINTO Technologies when you joined? Were there any gaps? My first impression was that KINTO's history is interesting and its ambitions for the future are inspiring, so I was very excited to join the company. I was also excited about the many company initiatives to use the latest tools and technologies. At KTC there are more occasions than I expected to get to know everyone. I was also happy to have the chance to talk directly to the CIO and the CEO.

What is the atmosphere like on site? Everyone is very nice and works with passion while keeping a little bit of playfulness at the same time. Everyone is also very considerate towards each other, which makes it easy to work effectively.

How did you feel about writing a blog post? This is my first time writing a blog post in a context like this; I think it is a cool and very fun idea!

Hiroki Shirahama

Self-introduction: I am Shirahama from the New Vehicle Subscription Development Group, KINTO ONE Development Division. I am in charge of the front end of KINTO ONE.

How is your team structured? We have six team members, including myself and IU, who both joined in October.

What was your first impression of KINTO Technologies when you joined? Were there any gaps?
I found it incredibly refreshing to have the freedom to choose my work hours with full flextime.

What is the atmosphere like on site? Everyone quietly moves forward with the tasks they are responsible for. Since there are daily work reports and weekly reviews, I think it is an environment that makes it easy to understand what team members are working on and to consult them about their tasks.

How did you feel about writing a blog post? I had been curious about it, but I never thought I would write one so soon.

Conclusion

Even though the request to write for the Tech Blog came on short notice, thank you all for willingly sharing your impressions from right after joining the company! I hope this article captures a new side of KINTO Technologies. I look forward to much more interesting content from you in the future! :)
Why Are There Tons of NotFound Error Events in AWS CloudTrail!?

Hello. I'm Kurihara from the CCoE team at KINTO Technologies; even after (belatedly) watching 酒癖50, I still couldn't bring myself to dislike alcohol. My teammate Tada previously introduced the CCoE activities at KINTO Technologies, and we work every day to keep our cloud environments secure. While analyzing AWS CloudTrail logs to check the health of our AWS accounts, I noticed that large numbers of NotFound-type errors were occurring at regular intervals. It is a mundane topic, and any AWS user should be running into the same thing, yet googling it turned up nothing, so I wrote up my investigation as a blog post.

Conclusion

To cut to the chase: when analyzing AWS CloudTrail, you should exclude the NotFound-type errors that come through the AWS Config recorder's service-linked role. Because of how AWS Config behaves, these error events are unavoidable, so filtering them out appropriately reduces the noise in your analysis.

Investigation

Following the best practices for AWS multi-account management, KINTO Technologies runs a multi-account setup with a Landing Zone managed by AWS Control Tower. Accordingly, we manage configuration information with AWS Config and audit logs with AWS CloudTrail. While analyzing the CloudTrail logs to check the health of our AWS accounts, I found NotFound-type error events occurring in large numbers and at regular intervals.

Here is an AWS Athena analysis of about one month of CloudTrail logs for a certain AWS account. This account had only been issued and given minimal security settings; no workload runs on it.

```sql
-- Analyze the top errorCodes
WITH filtered AS (
    SELECT * FROM cloudtrail_logs WHERE errorCode IS NOT NULL
)
SELECT
    errorCode,
    count(errorCode) AS eventCount,
    count(errorCode) * 100 / (SELECT count(*) FROM filtered) AS errorRate
FROM filtered
GROUP BY errorCode
```

| errorCode | eventCount | errorRate |
| --- | --- | --- |
| ResourceNotFoundException | 1,515 | 18 |
| ReplicationConfigurationNotFoundError | 1,112 | 13 |
| ObjectLockConfigurationNotFoundError | 958 | 11 |
| NoSuchWebsiteConfiguration | 954 | 11 |
| NoSuchCORSConfiguration | 952 | 11 |
| InvalidRequestException | 627 | 7 |
| Client.RequestLimitExceeded | 609 | 7 |

```sql
-- Check how often a specific errorCode occurs
SELECT
    date(from_iso8601_timestamp(eventtime)) AS "date",
    count(*) AS "count"
FROM cloudtrail_logs
WHERE errorcode = 'ResourceNotFoundException'
GROUP BY date(from_iso8601_timestamp(eventtime))
ORDER BY "date" ASC
LIMIT 5
```

| date | count |
| --- | --- |
| 2023-10-19 | 52 |
| 2023-10-20 | 80 |
| 2023-10-21 | 80 |
| 2023-10-22 | 80 |
| 2023-10-23 | 80 |

Picking a few errorCodes and looking at the CloudTrail records (the actual AWS CloudTrail logs are included at the end of this article), the arn field of userIdentity (the access source) was in every case `arn:aws:sts::${AWS_ACCOUNT_ID}:assumed-role/AWSServiceRoleForConfig/${SESSION_NAME}`. This is the service-linked role attached to AWS Config. At first I could not see why the calls ended in NotFound even though the target resources existed, but checking the eventName showed that these were not the APIs that fetch a resource's own configuration; they were APIs that fetch information about resources subordinate to it.

| Resource | errorCode | API called (eventName) |
| --- | --- | --- |
| Lambda | ResourceNotFoundException | GetPolicy20150331v2 |
| S3 | ReplicationConfigurationNotFoundError | GetBucketReplication |
| S3 | NoSuchCORSConfiguration | GetBucketCors |

These errors do not affect workloads, but they are noise for routine monitoring and troubleshooting, so we wanted to eliminate them. Doing so, however, would require non-essential work of "configuring something on the related resource" (for example, adding a resource-based policy to a Lambda function that allows the InvokeFunction action only from its own account). In the end, our CCoE team concluded that we would simply exclude access from the AWS Config service-linked role when analyzing AWS CloudTrail. With AWS Athena, the query looks like this:

```sql
SELECT *
FROM cloudtrail_logs
WHERE userIdentity.arn NOT LIKE '%AWSServiceRoleForConfig%'
```

A Little Deep Dive

Let me dive a little deeper into the AWS Config recording behavior uncovered during this investigation. Two things emerged that are not spelled out in the official documentation:

- The recording behavior for subordinate (supplementary) resources (a name I made up)
- The recording frequency for subordinate (supplementary) resources

Recording behavior for subordinate (supplementary) resources

AWS Config records not only the configuration of a resource itself but also its related resources (relationships). These are called "direct" and "indirect" relationships:

> AWS Config derives the relationships for most resource types from configuration fields; these are called "direct" relationships. A direct relationship is a one-way relationship (A→B) between a resource (A) and another resource (B), typically obtained from the Describe API response of resource (A). Previously, for some resource types that AWS Config supported early on, it also captured relationships from other resources' configurations, creating two-way (B→A) "indirect" relationships. For example, the relationship between an Amazon EC2 instance and its security group is direct, because the security group is included in the EC2 instance's Describe API response. Conversely, the relationship between the security group and the EC2 instance is indirect, because describing a security group does not return information about the instances associated with it. As a result, when a resource configuration change is detected, AWS Config creates a CI not only for that resource but also for related resources, including those with indirect relationships. For example, when AWS Config detects a change to an Amazon EC2 instance, it creates a CI for that instance and CIs for the security groups associated with it.
> -- https://docs.aws.amazon.com/ja_jp/config/latest/developerguide/faq.html#faq-1

I call them "subordinate (supplementary) resources," but apart from related resources, there are resources that look like part of the main resource's configuration yet have their own retrieval APIs. In the Lambda case, the function itself is a resource fetched with GetFunction, while its resource-based policy is a separate resource fetched with GetPolicy. Looking at the CI (Configuration Item), a subordinate (supplementary) resource such as the resource-based policy is recorded in the supplementaryConfiguration field, like this:

```json
{
  "version": "1.3",
  "accountId": "<$AWS_ACCOUNT_ID>",
  "configurationItemCaptureTime": "2023-12-15T09:52:19.238Z",
  "configurationItemStatus": "OK",
  "configurationStateId": "************",
  "configurationItemMD5Hash": "",
  "arn": "arn:aws:lambda:ap-northeast-1:<$AWS_ACCOUNT_ID>:function:check-config-behavior",
  "resourceType": "AWS::Lambda::Function",
  "resourceId": "check-config-behavior",
  "resourceName": "check-config-behavior",
  "awsRegion": "ap-northeast-1",
  "availabilityZone": "Not Applicable",
  "tags": {
    "Purpose": "investigate"
  },
  "relatedEvents": [],
  # related resources
  "relationships": [
    {
      "resourceType": "AWS::IAM::Role",
      "resourceName": "check-config-behavior-role-nkmqq3sh",
      "relationshipName": "Is associated with "
    }
  ],
  ... (snip)
  # subordinate (supplementary) resource
  "supplementaryConfiguration": {
    "Policy": "{\"Version\":\"2012-10-17\",\"Id\":\"default\",\"Statement\":[{\"Sid\":\"test-poilcy\",\"Effect\":\"Allow\",\"Principal\":{\"AWS\":\"arn:aws:iam::<$AWS_ACCOUNT_ID>:root\"},\"Action\":\"lambda:InvokeFunction\",\"Resource\":\"arn:aws:lambda:ap-northeast-1:<$AWS_ACCOUNT_ID>:function:check-config-behavior\"}]}",
    "Tags": {
      "Purpose": "investigate"
    }
  }
}
```

Recording frequency for subordinate (supplementary) resources

The frequency with which AWS Config records CIs follows the RecordingMode setting, but that does not seem to apply to subordinate (supplementary) resources. They may be retried when the result is NotFound, but the behavior looked like an attempt to record once every 12 or 24 hours, and there does not seem to be a consistent rule per type of subordinate resource either. Quite a black box, but that is what the investigation showed.

Summary

That was the identity of the mysterious NotFound error events appearing in AWS CloudTrail, and our countermeasure. We plan to investigate further, but we have already confirmed that similar error events also come from the Macie service-linked role. Analyzing AWS CloudTrail can be tedious work, but it is also a chance to understand AWS service behavior in depth, so I encourage you to do it proactively! If you are an engineer who wants to squeeze everything out of AWS, or you simply agree that Kosuke Koide is a great actor after all, the Platform Group is actively hiring!
Finally, here are the actual AWS CloudTrail error events for each case. Thank you for reading. Lambda: ResourceNotFoundException { "eventVersion": "1.08", "userIdentity": { "type": "AssumedRole", "principalId": "************:LambdaDescribeHandlerSession", "arn": "arn:aws:sts::<$AWS_ACCOUNT_ID>:assumed-role/AWSServiceRoleForConfig/LambdaDescribeHandlerSession", "accountId": "<$AWS_ACCOUNT_ID>", "accessKeyId": "*********", "sessionContext": { "sessionIssuer": { "type": "Role", "principalId": "*********", "arn": "arn:aws:iam::<$AWS_ACCOUNT_ID>:role/aws-service-role/config.amazonaws.com/AWSServiceRoleForConfig", "accountId": "<$AWS_ACCOUNT_ID>", "userName": "AWSServiceRoleForConfig" }, "webIdFederationData": {}, "attributes": { "creationDate": "2023-12-03T09:09:17Z", "mfaAuthenticated": "false" } }, "invokedBy": "config.amazonaws.com" }, "eventTime": "2023-12-03T09:09:19Z", "eventSource": "lambda.amazonaws.com", "eventName": "GetPolicy20150331v2", "awsRegion": "ap-northeast-1", "sourceIPAddress": "config.amazonaws.com", "userAgent": "config.amazonaws.com", "errorCode": "ResourceNotFoundException", "errorMessage": "The resource you requested does not exist.", "requestParameters": { "functionName": "**************" }, "responseElements": null, "requestID": "******************", "eventID": "******************", "readOnly": true, "eventType": "AwsApiCall", "managementEvent": true, "recipientAccountId": "<$AWS_ACCOUNT_ID>", "eventCategory": "Management" } S3: ReplicationConfigurationNotFoundError { "eventVersion": "1.09", "userIdentity": { "type": "AssumedRole", "principalId": "**********:AWSConfig-Describe", "arn": "arn:aws:sts::<$AWS_ACCOUNT_ID>:assumed-role/AWSServiceRoleForConfig/AWSConfig-Describe", "accountId": "<$AWS_ACCOUNT_ID>", "accessKeyId": "*************", "sessionContext": { "sessionIssuer": { "type": "Role", "principalId": "*************", "arn": "arn:aws:iam::<$AWS_ACCOUNT_ID>:role/aws-service-role/config.amazonaws.com/AWSServiceRoleForConfig", "accountId": "<$AWS_ACCOUNT_ID>", "userName": "AWSServiceRoleForConfig" }, "attributes": { "creationDate": "2023-12-03T13:09:16Z", "mfaAuthenticated": "false" } }, "invokedBy": "config.amazonaws.com" }, "eventTime": "2023-12-03T13:09:55Z", "eventSource": "s3.amazonaws.com", "eventName": "GetBucketReplication", "awsRegion": "ap-northeast-1", "sourceIPAddress": "config.amazonaws.com", "userAgent": "config.amazonaws.com", "errorCode": "ReplicationConfigurationNotFoundError", "errorMessage": "The replication configuration was not found", "requestParameters": { "replication": "", "bucketName": "*********", "Host": "*************" }, "responseElements": null, "additionalEventData": { "SignatureVersion": "SigV4", "CipherSuite": "ECDHE-RSA-AES128-GCM-SHA256", "bytesTransferredIn": 0, "AuthenticationMethod": "AuthHeader", "x-amz-id-2": "**************", "bytesTransferredOut": 338 }, "requestID": "**********", "eventID": "*************", "readOnly": true, "resources": [ { "accountId": "<$AWS_ACCOUNT_ID>", "type": "AWS::S3::Bucket", "ARN": "arn:aws:s3:::***********" } ], "eventType": "AwsApiCall", "managementEvent": true, "recipientAccountId": "<$AWS_ACCOUNT_ID>", "vpcEndpointId": "vpce-***********", "eventCategory": "Management" } S3: NoSuchCORSConfiguration { "eventVersion": "1.09", "userIdentity": { "type": "AssumedRole", "principalId": "***********:AWSConfig-Describe", "arn": "arn:aws:sts::<$AWS_ACCOUNT_ID>:assumed-role/AWSServiceRoleForConfig/AWSConfig-Describe", "accountId": "<$AWS_ACCOUNT_ID>", "accessKeyId": "***************", "sessionContext": { "sessionIssuer": { "type": "Role",
"principalId": "*************", "arn": "arn:aws:iam::<$AWS_ACCOUNT_ID>:role/aws-service-role/config.amazonaws.com/AWSServiceRoleForConfig", "accountId": "<$AWS_ACCOUNT_ID>", "userName": "AWSServiceRoleForConfig" }, "attributes": { "creationDate": "2023-12-03T13:09:16Z", "mfaAuthenticated": "false" } }, "invokedBy": "config.amazonaws.com" }, "eventTime": "2023-12-03T13:09:55Z", "eventSource": "s3.amazonaws.com", "eventName": "GetBucketCors", "awsRegion": "ap-northeast-1", "sourceIPAddress": "config.amazonaws.com", "userAgent": "config.amazonaws.com", "errorCode": "NoSuchCORSConfiguration", "errorMessage": "The CORS configuration does not exist", "requestParameters": { "bucketName": "********", "Host": "*************************8", "cors": "" }, "responseElements": null, "additionalEventData": { "SignatureVersion": "SigV4", "CipherSuite": "ECDHE-RSA-AES128-GCM-SHA256", "bytesTransferredIn": 0, "AuthenticationMethod": "AuthHeader", "x-amz-id-2": "*********************", "bytesTransferredOut": 339 }, "requestID": "***********", "eventID": "*****************", "readOnly": true, "resources": [ { "accountId": "<$AWS_ACCOUNT_ID>", "type": "AWS::S3::Bucket", "ARN": "arn:aws:s3:::*************" } ], "eventType": "AwsApiCall", "managementEvent": true, "recipientAccountId": "<$AWS_ACCOUNT_ID>", "vpcEndpointId": "vpce-********", "eventCategory": "Management" }
Spring Boot 2 to 3 Upgrade: Procedure, Challenges, and Solutions

Introduction

Hello. I am Takehana from the Payment Platform Team / Common Service Development Group [^1][^2][^3][^4] / Platform Development Division. This article covers upgrading Spring Boot, which we use for our payment platform APIs and batch jobs, to the latest version.

Challenges to Solve and Goals I Wanted to Achieve

We were on Spring Boot 2 and wanted to upgrade to 3 in consideration of the support period and other factors. The versions of the libraries we use were also upgraded:

| Library | Before migration (2) | After migration (3) |
| --- | --- | --- |
| Java | 17 | No change |
| MySQL | 5.7 | 8.0 |
| Spring Boot | 2.5.12 | 3.1.0 |
| Spring Boot Security | 2.5.12 | 3.1.0 |
| Spring Boot Data JPA | 2.5.12 | 3.1.0 |
| Hibernate Types | 2.21.1 | 3.5.0 |
| MyBatis Spring Boot | 2.2.0 | 3.0.2 |
| Spring Batch | 4.3 | 5.0 |
| Spring Boot Batch | 2.5.2 | 3.0.11 |
| Spring Boot Cloud AWS | 2.4.4 | 3.0.1 |

Trial and Error and Measures Taken

Method of Application: We first updated libraries and replaced deprecated APIs that had little impact on the existing code, referring to the official migration guides. After that, we updated to 3.1.0 and kept fixing, building, testing, and adjusting.

- Spring Boot 3.1 Release Notes
- Spring Boot 3.0 Migration Guide
- Spring Batch 5.0 Migration Guide

javax → jakarta: We changed packages from javax, which affected many files, to jakarta. The names below the package root did not change, so we replaced them mechanically.

Around DB access

mysql-connector-java: We changed to mysql-connector-j, since the artifact was relocated (see the Maven Repository).

MySQLDialect: Using org.hibernate.dialect.MySQLDialect lets Hibernate absorb the differences between MySQL versions.

Hibernate Types: The way to map the JSON type used in JPA entities changed with the upgrade.

Changing ID generation to IDENTITY: The automatic numbering behavior changed in Spring Data JPA; with AUTO, it now requires a XXX_seq table. Since our system uses MySQL's auto-increment, we decided not to use JPA's numbering feature and switched the strategy to IDENTITY.

Spring Batch

Modifying the meta tables: The structure of the Spring Batch management tables changed. We altered the existing tables using the migration guide as a reference, together with the following script:

```
/org/springframework/batch/core/migration/5.0/migration-mysql.sql
```

However, just executing the ALTER TABLE statements caused an error at run time due to the existing data, so after confirming that it would not affect future operation, we decided to reset the data to its initial state.

```
Caused by: java.lang.RuntimeException: java.lang.ClassNotFoundException: LONG
at org.springframework.batch.core.repository.dao.JdbcJobExecutionDao$2.processRow(JdbcJobExecutionDao.java:468)
...
```

(The PARAMETER_TYPE column of BATCH_JOB_EXECUTION_PARAMS contained a LONG value.)

The data was reset to its initial state with the following SQL:

```sql
TRUNCATE TABLE BATCH_STEP_EXECUTION_CONTEXT;
TRUNCATE TABLE BATCH_STEP_EXECUTION_SEQ;
TRUNCATE TABLE BATCH_JOB_SEQ;
TRUNCATE TABLE BATCH_JOB_EXECUTION_SEQ;
TRUNCATE TABLE BATCH_JOB_EXECUTION_PARAMS;
TRUNCATE TABLE BATCH_JOB_EXECUTION_CONTEXT;
SET foreign_key_checks = 0;
TRUNCATE TABLE BATCH_JOB_EXECUTION;
TRUNCATE TABLE BATCH_JOB_INSTANCE;
TRUNCATE TABLE BATCH_STEP_EXECUTION;
SET foreign_key_checks = 1;
INSERT INTO BATCH_STEP_EXECUTION_SEQ VALUES(0, '0');
INSERT INTO BATCH_JOB_EXECUTION_SEQ VALUES(0, '0');
INSERT INTO BATCH_JOB_SEQ values(0, '0');
```

BasicBatchConfigurer can no longer be used: We changed over to DefaultBatchConfiguration.

StepBuilderFactory and JobBuilderFactory were deprecated: The JobRepository and the TransactionManager are now passed directly to new StepBuilder() (a minimal sketch follows below).
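To make the new builder style concrete, here is a minimal sketch of step and job wiring under Spring Batch 5. It is written in Kotlin for brevity and is illustrative only: the bean names, the no-op tasklet, and the job layout are made up rather than taken from our codebase.

```kotlin
import org.springframework.batch.core.Job
import org.springframework.batch.core.Step
import org.springframework.batch.core.job.builder.JobBuilder
import org.springframework.batch.core.repository.JobRepository
import org.springframework.batch.core.step.builder.StepBuilder
import org.springframework.batch.repeat.RepeatStatus
import org.springframework.context.annotation.Bean
import org.springframework.context.annotation.Configuration
import org.springframework.transaction.PlatformTransactionManager

@Configuration
class SampleBatchConfig {

    // In Spring Batch 5 the JobRepository is passed directly to the builders
    // instead of going through StepBuilderFactory / JobBuilderFactory.
    @Bean
    fun sampleStep(
        jobRepository: JobRepository,
        transactionManager: PlatformTransactionManager,
    ): Step =
        StepBuilder("sampleStep", jobRepository)
            // The transaction manager now goes to the tasklet/chunk definition.
            .tasklet({ _, _ -> RepeatStatus.FINISHED }, transactionManager)
            .build()

    @Bean
    fun sampleJob(jobRepository: JobRepository, sampleStep: Step): Job =
        JobBuilder("sampleJob", jobRepository)
            .start(sampleStep)
            .build()
}
```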
The argument type of ItemWriter changed

The write process had to be fixed after the argument type changed from List to org.springframework.batch.item.Chunk.

Before correction:

```java
ItemWriter<Dto> write() {
    return items -> {
        // ...
        items.stream()
            .flatMap(dto -> dto.getDatas().stream())
            .forEach(repository::update);
        // ...
```

After correction:

```java
ItemWriter<Dto> write() {
    return items -> {
        // ...
        items.getItems().stream()
            .flatMap(dto -> dto.getDatas().stream())
            .forEach(repository::update);
        // ...
```

The behavior of @EnableBatchProcessing changed

When we checked operation, chunk-model batch steps were being skipped: the behavior of @EnableBatchProcessing had changed (in Spring Boot 3, annotating a configuration class with it now tells Spring Boot's batch auto-configuration to back off).

Spring Cloud AWS

Library changes: This system uses many AWS services, and Spring Cloud AWS was used to integrate with them. With the update, io.awspring.cloud:spring-cloud-starter-aws was changed to io.awspring.cloud:spring-cloud-aws-starter (confusing), com.amazonaws:aws-java-sdk was replaced with software.amazon.awssdk (AWS SDK for Java v2), and we fixed the code so it would run.

SES: Because AmazonSimpleEmailService can no longer be used, we switched the implementation to JavaMailSender. The JavaMailSender we use is built by the SES auto-configuration and injected via DI.

SQS: Request objects, such as those for sending to SQS, are now created with the builder pattern, so we fixed them accordingly. In addition, @NotificationMessage used in the SQS listener is gone, so we created an SqsListenerConfigurer and prepared a MessageConverter:

```java
@Bean
public SqsListenerConfigurer configurer(ObjectMapper objectMapper) {
    return registrar -> registrar.manageMessageConverters(
        list -> list.addAll(
            0,
            List.of(
                new SQSEventModelMessageConverter(
                    objectMapper, ReceiveEventModel.class),
                // ...
}

@RequiredArgsConstructor
private static class SQSEventModelMessageConverter implements MessageConverter {

    private static final String SQS_EVENT_FIELD_MESSAGE = "Message";

    private final ObjectMapper objectMapper;
    private final Class<?> modelClass;

    @Override
    public Object fromMessage(Message<?> message, Class<?> targetClass) {
        if (modelClass != targetClass) {
            return null;
        }
        try {
            val payload = objectMapper
                .readTree(message.getPayload().toString())
                .get(SQS_EVENT_FIELD_MESSAGE)
                .asText();
            return objectMapper.readValue(payload, targetClass);
        } catch (IOException ex) {
            throw new MessageConversionException(
                message, "Could not read JSON: " + ex.getMessage(), ex);
        }
        // ...
}
```

S3: For uploads to S3, TransferManager was changed to S3TransferManager, and the implementation for issuing signed URLs needed to be fixed.

SNS: With DefaultTopicArnResolver, the sns:CreateTopic permission was required for publishing to SNS. We switched to TopicsListingTopicArnResolver, so the CreateTopic permission is no longer needed:

```java
@ConditionalOnProperty("spring.cloud.aws.sns.enabled")
@Configuration
public class SNSConfig {
    @Bean
    public TopicArnResolver topicArnResolver(SnsClient snsClient) {
        return new TopicsListingTopicArnResolver(snsClient);
    }
}
```

Around the API

WebSecurityConfigurerAdapter cannot be referenced: We switched to a configuration style that exposes a SecurityFilterChain, referring to the spring-security documentation (a minimal sketch appears at the end of this section).

Stricter URL paths: Paths with a trailing slash are now strictly distinguished. Since this system is linked with another internal system, we temporarily added the trailing-slash path to @RequestMapping, coordinated the change with the peer system, and then removed the added path.

Before addition:

```java
@RequestMapping(
    method = RequestMethod.GET,
    value = {"/api/payments/{id}"},
    // ...
```

After addition:

```java
@RequestMapping(
    method = RequestMethod.GET,
    value = {"/api/payments/{id}", "/api/payments/{id}/"},
    // ...
```
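Supplementing the WebSecurityConfigurerAdapter item above, a minimal SecurityFilterChain bean in the Spring Security 6 style might look like the following sketch (again in Kotlin for brevity). The path and the rules are placeholders, not our actual security configuration.

```kotlin
import org.springframework.context.annotation.Bean
import org.springframework.context.annotation.Configuration
import org.springframework.security.config.Customizer
import org.springframework.security.config.annotation.web.builders.HttpSecurity
import org.springframework.security.web.SecurityFilterChain

@Configuration
class SecurityConfig {

    // Instead of extending the removed WebSecurityConfigurerAdapter,
    // expose the filter chain as a bean built from HttpSecurity.
    @Bean
    fun filterChain(http: HttpSecurity): SecurityFilterChain {
        http
            .authorizeHttpRequests { auth ->
                auth.requestMatchers("/api/health").permitAll() // placeholder path
                    .anyRequest().authenticated()
            }
            .httpBasic(Customizer.withDefaults())
        return http.build()
    }
}
```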
Renaming properties (application.yml)

Properties such as those for JPA, Redis, and Spring Cloud AWS were renamed. We adjusted them according to the official information.

Deployment

A 404 with the ECS deployment: We reached the point where we could deploy to ECS and confirmed the launch in the application log, but when we accessed the API we got a 404. On checking, the health check was failing the ECS deployment. With the help of our cloud platform engineers, we discovered that the version of the aws-opentelemetry-agent used for telemetry data collection was outdated. After changing to jar version 1.23.0 or later, we could deploy successfully and confirm API communication (see OpenTelemetry Java provided by AWS).

Results, Additional Knowledge, and Next Attempts

Some parts of the system deviated from the common Spring Boot implementation patterns in order to meet various requirements, and in places the structure did not let us migrate easily with just the migration guides. We managed to release after repeated trial and error. I would like to thank the team for their continued work and reviews. We will keep addressing the following remaining issues while also taking advantage of the features improved in Spring Boot 3.

Swagger UI: We put off upgrading this. Since springfox is not yet compatible with Spring Boot 3, we are considering changing to springdoc-openapi (see the build-file sketch at the end of this article).

Spring Batch + MySQL 8.0 + DbUnit: This combination results in an error under certain conditions. It seems related to Spring Batch's transaction management (meta-table operations), and we are looking into how to fix it.

Summary of the Lessons of This Article

- We were able to upgrade Spring Boot by repeatedly building and testing while referring to the migration guides.
- The update had a wide range of effects, but because tests were in place, we could see what had to be fixed and address it efficiently.
- Some problems, such as the change to @EnableBatchProcessing, only surfaced when actually running operations, so runtime checks were also necessary.
- Regarding Java EE, with the change to Jakarta, we had to update the Spring Boot library and others.
- Security is stronger (trailing-slash rules are stricter, an auth filter can be combined with ignored paths, and so on).
- The updates differed for each dependent library, and Spring Cloud AWS was especially different. We might have needed fewer changes if we had upgraded libraries more frequently.

Thank you for reading this article. I hope it is useful for those who are also considering an upgrade.

[^1]: Posted by a member of the Common Services Development Group [ Domain-Driven Design (DDD) is incorporated into a payment platform with a view to global expansion ]
[^2]: Posted by a member of the Common Services Development Group [ New Employees Develop a New System with Remote Mob Programming ]
[^3]: Posted by a member of the Common Services Development Group [ Improving Deployment Traceability with JIRA and GitHub Actions ]
[^4]: Posted by a member of the Common Services Development Group [ Building a Development Environment with VSCode Dev Container ]
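Supplementing the Swagger UI item above: if we do move to springdoc-openapi, the dependency swap in a Gradle Kotlin DSL build file might look roughly like this. This is a sketch, not our actual build file; the artifact coordinates are the public ones, and the version number is illustrative.

```kotlin
// build.gradle.kts (sketch): swapping the Swagger stack for Spring Boot 3
dependencies {
    // springfox has no Spring Boot 3 support, so it would be removed:
    // implementation("io.springfox:springfox-boot-starter:3.0.0")

    // springdoc-openapi v2 is the Jakarta / Spring Boot 3 line:
    implementation("org.springdoc:springdoc-openapi-starter-webmvc-ui:2.1.0")
}
```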
Room Migration
Introduction

Hello, I'm Hasegawa from KINTO Technologies. I usually work as an Android engineer, developing an application called "my route by KINTO." In this article, I will talk about my experiences with database migration while developing the Android version of my route by KINTO.

Overview

Room is an official Android library that makes local data persistence easy. Storing data on the device has significant advantages from a user's perspective, including the ability to use apps offline. From a developer's perspective, on the other hand, there are a few tasks that need to be done. One of them is migration. Although Room officially supports automated database migration, there are cases where updates involving complex schema changes need to be handled manually. This article covers everything from simple automated migration to complex manual migration, along with several use cases.

What Happens If a Migration Is Not Done Correctly?

Have you ever thought about what happens if you don't migrate data correctly? There are two main failure patterns, determined by how the app handles migration:

- The app crashes
- Data is lost

You may have experienced apps crashing if you use Room. The following errors occur depending on the case.

When the database version has been updated, but the appropriate migration path has not been provided:

```
A migration from 1 to 2 was required but not found. Please provide the necessary Migration path
```

When the schema has been updated, but the database version has not been updated:

```
Room cannot verify the data integrity. Looks like you've changed schema but forgot to update the version number.
```

When a manual migration is not working properly:

```
Migration didn't properly handle: FooEntity().
```

Basically, all of these can occur in the development environment, so I don't think they are that much of a problem in themselves. Note, however, that if the fallbackToDestructiveMigration() described below is used to paper over migration failures, they can become very hard to notice, and in some cases may occur only in the production environment.

What about "data loss"? Room lets you call fallbackToDestructiveMigration() when you create the database object. This function permanently deletes the data if a migration fails and lets the app start normally. I'm not sure whether it is used to deal with the errors mentioned above or to avoid the time-consuming work of writing migrations, but I have seen it used occasionally. With it, a migration failure silently turns into data loss, which is difficult to detect. So it is best to strive for successful migrations.

Four Migration Scenarios

Here are four examples of schema updates that may occur in the course of app development.

1. Adding a New Table

Since adding a new table does not affect existing data, it can be migrated automatically. For example, if you have an entity named FooClass in DB version 1 and add an entity named BarClass in DB version 2, you can simply pass AutoMigration(from = 1, to = 2) to autoMigrations as follows:

```kotlin
@Database(
    entities = [
        FooClass::class,
        BarClass::class, // added
    ],
    version = 2, // 1 -> 2
    autoMigrations = [
        AutoMigration(from = 1, to = 2)
    ]
)
abstract class AppDatabase : RoomDatabase() {}
```

2. Deleting or Renaming Tables, Deleting or Renaming Columns

Automated migration is also possible for deletions and renames, but you need to define an AutoMigrationSpec. As an example of the most likely case, a column rename, suppose the name column of the entity User is renamed to firstName.
```kotlin
@Entity
data class User(
    @PrimaryKey
    val id: Int,
    // val name: String, // old
    val firstName: String, // new
    val age: Int,
)
```

First, define a class that implements AutoMigrationSpec. Then annotate it with @RenameColumn, giving the necessary information about the renamed column as arguments. Pass the created class to the AutoMigration for the corresponding version, and pass that to autoMigrations:

```kotlin
@RenameColumn(
    tableName = "User",
    fromColumnName = "name",
    toColumnName = "firstName"
)
class NameToFirstnameAutoMigrationSpec : AutoMigrationSpec

@Database(
    entities = [
        User::class,
        Person::class
    ],
    version = 2,
    autoMigrations = [
        AutoMigration(from = 1, to = 2, spec = NameToFirstnameAutoMigrationSpec::class),
    ]
)
abstract class AppDatabase : RoomDatabase() {}
```

Room provides additional annotations, including @DeleteTable, @RenameTable, and @DeleteColumn, which make deletions and renames just as easy to handle.

3. Adding a Column

Personally, I think adding a column is the most common case. Let's say a height column is added to the entity User:

```kotlin
@Entity
data class User(
    @PrimaryKey
    val id: Int,
    val name: String,
    val age: Int,
    val height: Int, // new
)
```

Adding a column requires manual migration, because you have to tell Room the default value for height. Simply create an object that extends Migration as follows and pass it to addMigrations() when creating the database object. Write the required SQL statement in database.execSQL:

```kotlin
val MIGRATION_1_2 = object : Migration(1, 2) {
    override fun migrate(database: SupportSQLiteDatabase) {
        database.execSQL(
            "ALTER TABLE User ADD COLUMN height INTEGER NOT NULL DEFAULT 0"
        )
    }
}

val db = Room.databaseBuilder(
    context,
    AppDatabase::class.java,
    "database-name"
)
    .addMigrations(MIGRATION_1_2)
    .build()
```

4. Adding a Primary Key

In my experience with the app, there were cases where a primary key had to be added: the primary key assumed when the table was created was no longer sufficient to maintain uniqueness, so other columns were added to the primary key. For example, suppose that in the User table, id was the primary key until now, but name is added so that they become a composite primary key:

```kotlin
// DB version 1
@Entity
data class User(
    @PrimaryKey
    val id: Int,
    val name: String,
    val age: Int,
)

// DB version 2
@Entity(
    primaryKeys = ["id", "name"]
)
data class User(
    val id: Int,
    val name: String,
    val age: Int,
)
```

In this case, and not only on Android, the common approach is to create a new table. The following SQL creates a table named UserNew with the new primary key constraint and copies over the data from the User table. Then the existing User table is dropped and UserNew is renamed to User:

```kotlin
val migration_1_2 = object : Migration(1, 2) {
    override fun migrate(database: SupportSQLiteDatabase) {
        database.execSQL("CREATE TABLE IF NOT EXISTS UserNew (`id` INTEGER NOT NULL, `name` TEXT NOT NULL, `age` INTEGER NOT NULL, PRIMARY KEY(`id`, `name`))")
        database.execSQL("INSERT INTO UserNew (`id`, `name`, `age`) SELECT `id`, `name`, `age` FROM User")
        database.execSQL("DROP TABLE User")
        database.execSQL("ALTER TABLE UserNew RENAME TO User")
    }
}
```

Let's Check That the Migration Works Correctly!

There are many cases more complex than the migration examples above. Even in the app I work on, there have been changes to tables that involve foreign keys. In such cases, the only way is to write SQL statements, and you want to make sure that the SQL really works correctly.
For this purpose, Room provides a way to test migrations. The following test code can be used to check that migrations work properly. To run the tests, the schema for each database version needs to be exported beforehand; see "Export schemas" for more information. Even if you did not export the schema of an old database version, it is recommended to identify the past version from git tags or the like and export its schema.

The point is for the production code and the test code to refer to the same migrations, as with the manualMigrations list defined below. This way, even if you later add, say, migration5To6 to the production code, you can rest assured that the test code will automatically verify it (a per-step validation sketch follows after the reference).

```kotlin
// production code
val manualMigrations = listOf(
    migration1To2,
    migration2To3,
    // 3 -> 4 is an automated migration
    migration4To5,
)

// test code
@RunWith(AndroidJUnit4::class)
class MigrationTest {

    private val TEST_DB = "migration-test"

    @get:Rule
    val helper: MigrationTestHelper = MigrationTestHelper(
        InstrumentationRegistry.getInstrumentation(),
        AppDatabase::class.java,
    )

    @Test
    @Throws(IOException::class)
    fun migrateAll() {
        // Create the database at version 1, then re-open it at the latest
        // version, running every manual migration along the way.
        helper.createDatabase(TEST_DB, 1).apply { close() }

        Room.databaseBuilder(
            InstrumentationRegistry.getInstrumentation().targetContext,
            AppDatabase::class.java,
            TEST_DB
        ).addMigrations(*manualMigrations.toTypedArray()).build().apply {
            openHelper.writableDatabase.close()
        }
    }
}
```

Summary

Today I talked a bit about Room migration with a few use cases. I'd like to avoid manual migrations as much as possible, and I believe the key to achieving that is ensuring the entire team is involved in table design. Also, remember to export the schema for each database version; otherwise future developers will have a hard time going back through git to export and verify old schemas.

Thank you for reading.

Reference

https://developer.android.com/training/data-storage/room/migrating-db-versions?hl=ja
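As promised above, here is a sketch of validating a single migration step with Room's MigrationTestHelper.runMigrationsAndValidate. It assumes it sits inside the MigrationTest class shown earlier, reuses the MIGRATION_1_2 object and the User schema from the "Adding a Column" section, and assumes the version 1 and 2 schemas have been exported; the test name and the inserted row are made up for illustration.

```kotlin
@Test
fun migrate1To2_appliesHeightDefault() {
    // Create the database at version 1 and insert a row using the old schema.
    helper.createDatabase(TEST_DB, 1).apply {
        execSQL("INSERT INTO User (id, name, age) VALUES (1, 'alice', 20)")
        close()
    }

    // Re-open at version 2 running only MIGRATION_1_2; Room validates the
    // resulting schema against the exported version-2 schema file.
    val db = helper.runMigrationsAndValidate(TEST_DB, 2, true, MIGRATION_1_2)

    // The new column should exist and carry its DEFAULT 0 for existing rows.
    db.query("SELECT height FROM User WHERE id = 1").use { cursor ->
        check(cursor.moveToFirst())
        check(cursor.getInt(0) == 0)
    }
}
```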
Hello. I am Ohsugi from the Woven Payment Solution Development Group. My team is developing the payment system used by Woven by Toyota for Toyota Woven City. We typically use Kotlin/Ktor for backend development and Flutter for the frontend. In a previous article, I discussed the process of selecting the frontend technology prior to the development of our web application. Since then, we have expanded our operations and are currently working on seven Flutter applications across web and mobile platforms. In this article, I will talk about how our team, which originally had only backend engineers, came up with ways to develop multiple Flutter applications efficiently in parallel with backend development.

Flutter and the related logo are trademarks of Google LLC. We are not endorsed by or affiliated with Google LLC.

Overview

As mentioned, our team does both backend and frontend development for a payment system. It is a payment application used through web-based management screens, or on mobile devices for the Proof of Concept (PoC) at Woven City. To develop the Flutter apps efficiently in parallel with the backend development of the payment system, we took the following steps:

- Design a common application architecture
- Form a design policy for lazy UI components
- Define the unified tech stack and development flow

Design a Common Application Architecture

Various architectures have been proposed for both backend and frontend over the years, but I think it is best to pick one that suits the development team and the product phase, and improve it along the way. We adopted a clean architecture for backend development, and for the Flutter applications an architecture that uses only MVVM and the repository pattern, with a similar layer-first directory structure. Concretely, the directory structure is as follows.

Directory configuration:

```
lib/
├── presentations
│   ├── pages
│   │   └── home_page
│   │       ├── home_page.dart
│   │       ├── home_page_vm.dart
│   │       ├── home_page_state.dart
│   │       └── components
│   ├── components
│   ├── style.dart // a common style definition
│   └── app.dart
├── domains
│   ├── entities
│   └── repositories // repository interface
├── infrastructures
│   └── repositories
└── main.dart
```

Directory roles: There are three main directories that make up the layer structure. Here is a brief description of each directory's role.

| Directory | Layer | Role |
| --- | --- | --- |
| presentations | Presentation layer | Defines the View, the ViewModel, and, if necessary, the states |
| domains | Domain layer | Defines domain models, logic, and repository interfaces |
| infrastructures | Infrastructure layer | Defines repository implementations, including those for API calls |

When designing with a layer pattern you may want a use case layer, but there is currently very little business logic in the frontend, so we folded it into the ViewModel. The application we are developing does not have complex functionality yet and is basically one page = one domain, so this design is serving us well. When creating a new app for a PoC, we usually start from this template so that there are no architectural differences between applications.

Form a Design Policy for Lazy UI Components

When designing UI components, we decided not to adopt Atomic Design and not to make too many common components. There are some drawbacks, but we did it this way for the following reasons:

- It was difficult for all members to share the same sense of the levels in the Atomic Design classification
- We wanted to focus on page implementation rather than on building common components
- Most importantly, it takes a lot of energy to build an abstract widget in Flutter

I think making common components is the more common approach, but at this point we are in a phase of changing the application while specifications are still fluid, and we judged that not standardizing much would be more beneficial in the short term.

Define the Unified Tech Stack and Development Flow

Many different technologies for state management and screen-transition frameworks have come and gone, and beginners get confused about which library to use because there is so much information; I have experienced this myself and can relate. So we decided to use the following tech stack across all applications:

| Target | Library |
| --- | --- |
| State management and provider creation | riverpod |
| Model definition | freezed |
| Screen transition | go_router |
| API client | dio, openapi-generator |
| Project management | melos |

:::message
We are doing schema-driven development using OpenAPI, and we automatically generate the frontend API clients with openapi-generator from the OpenAPI schema YAML files created during backend development.
:::

We use Riverpod for state management and provider creation. Riverpod's concept of providers is unfamiliar from a backend point of view, and since providers can be hand-written in any number of ways, we defined the implementation flow and where providers are used somewhat strictly:

- Always use riverpod_generator to generate providers
- A provider is used to bind an infrastructure-layer repository to its domain-layer interface:

```dart
@riverpod
Future<HogeRepository> hogeRepository(HogeRepositoryRef ref) async {
  final apiClient = await ref.watch(openApiClientProvider.future);
  return HogeRepositoryImpl(
    apiClient: apiClient.getHogeApi(),
  );
}
```

- The ViewModel is implemented with an AsyncNotifierProvider, and the repository providers required by a View are aggregated into its ViewModel:

```dart
@riverpod
class HogePageViewModel extends _$HogePageViewModel {
  @override
  Future<List<Hoge>> build() {
    return _fetchData();
  }

  Future<List<Hoge>> _fetchData() async {
    final repository = await ref.watch(hogeRepositoryProvider.future);
    return repository.getList();
  }

  Future<void> registerData(Hoge hoge) async {
    final repository = await ref.read(hogeRepositoryProvider.future);
    return repository.register(hoge);
  }
}
```

- The View watches the AsyncValue from the ViewModel and renders the UI; CRUD operations go to the repository via the ViewModel

As above, we defined the flow from repository implementation to wiring up the UI and the backend, and when creating sprint tasks, tickets are split by granularity along this flow.

Conclusion

As the priority of client application development rose within the project, we established a frontend development policy and came up with ways to develop smoothly as a team. Since many web management screens share a basic set of a list screen, a details screen, and an editing screen, we are also thinking about ways to implement UI more efficiently using code generators in the future.
Introduction

Hello! I'm Wada (@cognac_n), a data scientist at KTC. In January 2024, a "Generative AI Utilization Project" was launched at KTC, and I have been assigned to it as a member. In this article I would like to introduce the project.

What is generative AI?

Literally, it refers to "AI that generates new data." It shot into the spotlight when OpenAI released ChatGPT in November 2022. AI has gone through several temporary booms before (*1), but the "fourth AI boom (*2)" driven by advances in generative AI has gone beyond a mere boom and is starting to take root in our daily lives and work. I believe the use of generative AI, which will only keep growing, has enough impact to overturn much of the common sense of everyday life and work.

What we have done so far

The project was launched in January 2024, but we had been working with generative AI before that. A few examples of those efforts:

- In-house development of an AI chatbot as an internal Slack bot
- Hosting an external hands-on event themed on generative AI
- Promoting generative AI tools internally
- DX of customer-center operations using generative AI
- Planning and developing a new service using generative AI

and so on. In truth, though, there were also many initiatives we reluctantly had to shelve for lack of capacity. Now that an organization has been formally set up as a project, I think we can push generative AI adoption even more broadly. I'm looking forward to it!

What the project aims for

Our stance: What we value is "contributing to the company's business activities" through technology. We aim to be a "problem-solving organization" that resolves internal issues with overwhelming speed, quality, and volume. We will not be pundits who try things and stop there; we will keep being an organization committed to delivering value!

The impact we want to have on the company: We aim to become a company where every single employee uses generative AI as a matter of course! ...But what does that state actually look like? Perhaps something like this:

- People can notice that "this task is suited to generative AI and can be delegated to it"
- People can write basic prompts appropriate to the task at hand
- A culture has taken root that can accept "output produced by generative AI"

In the rapidly changing world of generative AI, I think we need to keep asking ourselves what state we should be aiming for.

To get there... The project currently divides generative AI initiatives into three levels:

- Level 1: First "just try it" with existing systems
- Level 2: Create further value with minimal development
- Level 3: Maximize the value delivered to the business

(Levels of generative AI initiatives / Estimate the value of each initiative and aim for the appropriate level)

This does not mean every initiative should aim for Level 3. If Level 1 already creates sufficient value, there may be no need to spend the cost and effort to move to Level 2. What matters is trying many ideas at Level 1. For that, it is ideal if every employee, non-engineers included, has a high enough level of AI literacy to carry out Level 1 on their own.

What we want to work on next

From accompanied "just try it"... A few months have passed since generative AI tools usable inside the company were introduced, but we still hear "I don't know what they can do" and "I don't know when to use them." To start, those of us with generative AI know-how will provide careful support, explaining which kinds of tasks suit generative AI and what kind of prompt to write, and grow the number of use cases:

- "Just try it" at first, with attentive support
- Increase the examples of value created with generative AI inside the company
- Make internal generative AI use "a matter of course"

...to autonomous "just try it": With accompanied problem-solving alone, our own capacity becomes the bottleneck and it does not scale. We want the people in charge of each task to notice by themselves that "this task suits generative AI" and to be able to "just try" Level 1 with basic prompts:

- Enable task owners to carry out Level 1 on their own
- We take consultations on improving Level 1 and on moving up to Level 2

Training to make that happen: We will expand in-house training to raise employees' AI literacy across the board, aiming for a culture where many employees share a common understanding of generative AI, can discuss its use smoothly, and can accept generative AI output:

- Expand the in-house IT literacy training
- Tailor training to job type and skill level
- Run it at fine granularity: image generation, summarization, translation, and so on
- Use participant feedback to deliver the training people actually need, on a short turnaround

Information sharing: Starting with this tech blog, we will share our work through various channels. We are planning a variety of content, including technical reviews of generative AI and introductions to the project's initiatives. Stay tuned!

Closing

Thank you for reading this far! Much of this was abstract, but I hope it serves as a reference for others who, like us, aim to put generative AI to use.

References
[*1] Ministry of Internal Affairs and Communications. "人工知能(AI)研究の歴史" (accessed 2024-01-16)
[*2] Nomura Research Institute. "生成AIで変わる未来の風景" (accessed 2024-01-16)
Introduction

Hello, I'm Daichi, a front-end engineer in the Global Development Division at KINTO Technologies (KTC). I currently develop the e-commerce site for KINTO FACTORY, a vehicle upgrade service for owners of Toyota and Lexus cars. Through its three services (Reform, Upgrade, and Personalize), owners can have the latest hardware and software incorporated into their vehicles.

For a fast-growing e-commerce site, we want to reach more users and deliver a better user experience, so SEO and page load time matter. I'd like to walk through how we optimized our Core Web Vitals scores to improve KINTO FACTORY's SEO and page load time. The details follow below.

What are Core Web Vitals?

Core Web Vitals are a set of metrics, developed by Google, that measure real-world user experience in terms of page loading performance, interactivity, and visual stability. In May 2021, Google announced Core Web Vitals as ranking signals that affect SEO. As of 2023, there are three main Core Web Vitals metrics:

- Largest Contentful Paint (LCP): measures how long it takes to load the largest image or block of text in the viewport (PC or smartphone screen).
- First Input Delay (FID): measures how long the browser takes to respond when a user interacts with the page (button click, tap, text input, and so on). A similar metric, Interaction to Next Paint (INP), covers responsiveness after the initial load; Google has announced that INP will replace FID in March 2024.
- Cumulative Layout Shift (CLS): measures visual stability while the web page loads.

Before optimization

There are many tools to measure a website, but I recommend Google PageSpeed Insights. You get a detailed report on the areas that need improvement (including how to fix the problem spots) and can see the real-world performance of your page (based on Chrome browser data).

Before optimization, KINTO FACTORY's mobile and desktop scores were as follows. Comparing the results below against the Core Web Vitals thresholds in the figure above, the figures in red are clearly unfavorable.

(Before optimization: Mobile / Desktop)

Analyzing the report, the main factor slowing down page loads was images:

- The assets loaded on the landing page (images above all) were far too heavy: roughly 13 MB on mobile and 14 MB on desktop.
- Individual images were too large (most over 300 KB).
- Image sizes did not match screen sizes; the same image was used on mobile and desktop.
- The Largest Contentful Paint image was slow to load.
- Image elements without width and height attributes shifted the layout of the entire site.
- Because of how the markup and CSS were written, both the mobile and the desktop image were loaded on every page view.

(Mobile asset size (before) / Desktop asset size (before))

After optimization

Having measured the site's performance and identified the multiple issues slowing the pages down, we got to work. For KINTO FACTORY we did the following:

- Checked every image and optimized it for its screen size, using the appropriate format per image, including WebP, starting with the Largest Contentful Paint image in the first view.
- Lazy-loaded the images that do not appear in the first view, so they load only when needed (when they scroll into view); at the same time, we made sure no lazy-loaded image sits in the first view.
- Set width and height on images so no layout shift occurs (above all for the Largest Contentful Paint image in the first view).
- Added a rel=preconnect resource hint so that font loading (Google Fonts) establishes its connection early.
- Eliminated the mobile/desktop markup pattern in which both image elements were rendered and merely toggled by styling (CSS), which loaded an unnecessary image on every page view:

```html
<!-- Before -->
<img src="pc-image.png" class="show-on-desktop-size" />
<img src="sp-image.png" class="show-on-mobile-size" />

<!-- After -->
<picture>
  <source media="(min-width: 600px)" srcset="pc-image.png" />
  <img src="sp-image.png" alt="🙂" />
</picture>
```

As a result of implementing the optimizations above, we were able to:

- Cut asset size by more than 60%
- Improve page load time
- Reduce Cumulative Layout Shift (CLS) to nearly zero

(After optimization (mobile/desktop) / Before optimization / After optimization / Mobile asset size (after) / Desktop asset size (after))

Conclusion

Core Web Vitals are a great way to measure a website's overall performance. As the reports show, simply optimizing assets (images, fonts) improves the user experience, ranks you higher in search results, and boosts SEO. As a first step for KINTO FACTORY we optimized the top page, and I think it was a big one. We have not reached the optimal score yet, though, so we will keep working on it so that every user gets the best possible experience.
Introduction

Hi! Thank you for your interest in my article! I am Yutaro Mikami, an engineer in the Project Development Division at KINTO Technologies. I joined the company in September this year and usually work as a front-end engineer on the development of KINTO FACTORY. In this article, I will write about my experience and efforts since joining KINTO Technologies, focusing on the theme of "Agile."

Topic

As the title suggests, I will talk about the initiatives we undertook to achieve accurate progress management in our team's Agile development, where our burndown chart showed the actual work line sitting consistently above the ideal work line. (Good!👍)

Main Body

What Progress Management Should Be

Burndown charts provide a quick overview of the decreasing remaining workload, offering the following effects and benefits:

- Reporting progress to stakeholders
- Maintaining visual motivation for developers
- Early detection of task stoppers
- Promoting team cooperation and collaboration

Definition of Current Issues

With the above in mind, I will use my team's burndown chart for one sprint and summarize the issues I identified.

(Burndown chart before Kaizen (improvement))

As work progresses, the graph naturally trends downward, but ours rises irregularly, and the gap from the ideal line at the end of the sprint is conspicuous. Checking this against the benefits above:

- Reporting progress to stakeholders: the report becomes unreliable, because the team cannot tell whether a rise in the graph is intended or not.
- Maintaining visual motivation for developers: there is no downward trend in the graph, and the lack of success experiences makes it difficult to stay motivated.
- Early detection of task stoppers: since we report progress daily, the team knows when task progress is falling behind; spotting stoppers from the graph itself, however, proves difficult.
- Promoting team cooperation and collaboration: there is adequate communication day to day, but few cases of cooperation prompted by the charts.

Kaizen Goals

A goal is not the end of the process, but for the sake of clarity I will use the word "goals" for what the team should aim for. The goals defined in this article are:

- Be able to understand and control the progress of tasks through the charts
- Allow the graph to rise as a team while recognizing the reasons for the rise
- Have each developer feel a sense of accomplishment through the charts
- Promote cooperation and collaboration throughout the team

After starting Kaizen

We are still in the middle of Kaizen, but the chart is already on an improving trend.

(Latest burndown chart)

What We Did, Step 1: "Cultivating Awareness" Kaizen

Stop simply stacking tasks into the sprint. This was the only concrete action, but I think it was very effective. With this awareness shared by the entire team, we successfully reduced the gap from the ideal line at the end of the sprint. In addition, the accuracy of our velocity improved, and we expect estimation precision to improve further.

What We Did, Step 2: "Planning" Kaizen

Set up a "place to store tickets for the next sprint." The additional tickets stacked during a sprint break down into two main categories:

- Tickets forgotten during planning
- Tickets added during the sprint

Tickets added during the sprint have several causes that were difficult to improve in the short term, so we first took action against the tickets forgotten during planning. We added a place in the backlog for "tickets to be stacked in the next sprint" and began discussing the tasks stored there (red frame in the image) during planning. This produced the following effects:

- Prevention of forgetting to stack: moving these tasks to the top of the backlog before planning eliminates forgotten tickets.
- Improvement of task comprehension: the breakdown time spent on each ticket has improved every member's understanding of the task.
- Stacking the appropriate number of tasks into the sprint: linked to the awareness Kaizen, planning became an opportunity to select the tasks to stack, rather than just stacking them anyway, so an appropriate number of tasks goes into each sprint.

What We Did, Step 3: Kaizen Meetings

In addition to the retrospective, we set aside time in our daily Scrum activities for members to discuss issues and improvements. Discussing the issues to address over the short to long term and deciding on next actions raised the team's awareness.

Results

- Reporting progress to stakeholders: before, the report was unreliable because the team could not tell whether a rise in the graph was intended. Improvements in the graph's accuracy and reliability now make accurate progress reports possible.
- Maintaining visual motivation for developers: before, there was no downward trend and few success experiences. Now there is a clear downward trend and we have had successes; handling tickets added during the sprint remains a future challenge.
- Early detection of task stoppers: we continue to report progress daily, and my impression is that stoppers are now starting to become visible from the graph.
- Promoting team cooperation and collaboration: there have been no cases of chart-driven cooperation yet, but we can now see better than before who is working on which tasks, so I feel we have built a system that allows cooperation and collaboration.

Conclusion

That concludes my article on conducting Scrum Kaizen by analyzing our burndown charts. Thank you for reading to the end. I have realized once again that iterating Kaizen is necessary not only for products but also for teams and processes in order to keep running Scrum as a team. Objective, data-based improvements using reports and charts make it easy to identify issues, and visible improvements help maintain motivation. I hope this helps your team's Kaizen as well!

Lastly, KINTO FACTORY, where I belong, is looking for people to join us! If you are interested, feel free to check out the job openings below.

@ card @ card
Introduction

Hello, I'm Chris, a front-end developer in the Global Development Division at KINTO Technologies. Today I'd like to share a small snag I hit in front-end development and how I solved it!

The snag

You have probably wanted to use an anchor tag (a tag) to scroll to a specific part of a page, like below. Give the scroll target an id, and set href="#{id}" on the a tag:

```html
<a href="#section-1">Section 1</a>
<a href="#section-2">Section 2</a>
<a href="#section-3">Section 3</a>

<section class="section" id="section-1">
  Section 1
</section>
<section class="section" id="section-2">
  Section 2
</section>
<section class="section" id="section-3">
  Section 3
</section>
```

On long pages such as articles or terms of service, this is helpful for users. In reality, though, pages often have an element fixed to the top, such as a header, and after clicking the anchor link and scrolling, the position ends up slightly off. For example, say you have a header like this:

```html
<style>
  header {
    position: fixed;
    top: 0;
    width: 100%;
    height: 80px;
    background-color: #989898;
    opacity: 0.8;
  }
</style>

<header>
  <a href="#section-1">......</a>
  <a href="#section-2">......</a>
  <a href="#section-3">......</a>
  ...
</header>
```

I deliberately made this header slightly transparent: you can see that after clicking an anchor link and jumping, part of the content ends up hidden behind the header.

A solution using only HTML and CSS

You could solve this with JavaScript by getting the header height on click and subtracting it from the scroll position, but today I want to introduce a solution using only HTML and CSS. Concretely, place a separate <div> slightly above the <section> you actually want to reach, and let the browser scroll to that element instead.

Returning to the earlier example, first create one div inside each section. Give it a class, say anchor-offset, and move the id originally on the <section> tag onto the newly created div:

```html
<section>
  <div class="anchor-offset" id="section-1"></div>
  <h1>Section 1</h1>
  ...
</section>
```

Then define styles for the <section> tag and .anchor-offset in CSS:

```css
/* Use a class instead if you want anchors only on specific elements */
section {
  position: relative;
}

.anchor-offset {
  position: absolute;
  height: 80px;
  top: -80px;
  visibility: hidden;
}
```

With this setup, clicking the anchor link scrolls not to the section's true position but to a point slightly above it (80px in this example), offsetting the header's height (80px).

How to write it in Vue

In Vue you can bind values to CSS. Using this feature to set the height dynamically and turning the anchor into a component makes it even easier to maintain:

```vue
<template>
  <div :id="props.target" class="anchor-offset"></div>
</template>

<script setup>
import { computed } from 'vue'

const props = defineProps({
  target: String,
  offset: Number,
})

const height = computed(() => {
  return `${props.offset}px`
})

const top = computed(() => {
  return `-${props.offset}px`
})
</script>

<style scoped lang="scss">
.anchor-offset {
  position: absolute;
  height: v-bind('height');
  top: v-bind('top');
  visibility: hidden;
}
</style>
```

Summary

That was how to adjust the scroll position to account for fixed elements such as headers when scrolling to a specific part of a page with a tags. There are plenty of other approaches, but I hope this serves as a reference!
👋Introduction Hello! I am Sasaki, a Project Manager in the Project Promotion Group at KINTO Technologies. In my career to date, I have worked as a programmer, designed as a Project Lead, trained members, and handled tasks akin to those of a Project Manager (defining requirements, managing stakeholders, etc.). In my previous job, I worked on Agile with the whole team for about three years and went through a real Kaizen (improvement) journey. As I am passionate about this topic, I really wanted to write an article about Agile development today! 🚗Toyota and Agile How are you incorporating Agile development methodology into your team? There are various forms of Agile development, such as Scrum for new services and Kanban for operation and maintenance. However, when learning Agile development, many of you may have encountered Lean Development and the Toyota Production System, which is said to be the origin of Agile development[^1]. In this article, I will visualize the approaches to Agile of KINTO Technologies, a Toyota group company. I also hope to help those who are working on Agile in the company gain new insights through visualization. [^1]: Agile books citing Toyota The Agile Samurai Lean from the Trenches Kanban in Action and more ::: message ### This article is useful for - those who want to understand their team's Agile state - those who are a bit stuck in a rut when it comes to how to proceed with Agile - those who are facing challenges reconciling Agile ideals with their realities - those who want to know about KINTO Technologies' approach to Agile ::: Method Quantitative visualization of each team's level of Scrum with the Scrum Checklist Discussion while reviewing the results of Step 1 Casually sharing teams future plans First, use the Scrum Checklist to visualize how much of each Scrum indicator have you accomplished so far. ![Sample: Results of Scrum Checklist](/assets/blog/authors/K.Sasaki/image-20231120-002531.png =400x) Once visualized, let discussions begin. Use the 4L Reflection Framework for discussion. https://www.ryuzee.com/contents/blog/14561 :::details Notes on the use of Scrum Checklist The provided Scrum Checklist has a note. Do not use it to compare with other teams for evaluation. It is not intended to compete with other teams. Instead, we use it as an opportunity for discussion to place different Agile teams in a similar context. If you use it in a similar way to this article, please avoid using it as a way to judge or evaluate people or teams, and use it among members in a constructive and mature manner. ::: 🎉Participating Members We asked for cooperation from Scrum Masters -or people in similar positions- who manage Scrum or Agile-like teams in their organization, and 10 teams (10 people from different teams) came! Thank you all for your time and cooperation! How we did it ✅ Scrum Checklist We made various charts. The results varied widely depending on the team's situation, such as some people saying "Although what we do is close to Waterfall, I am running a Scrum event", or others expressing "I felt that there were some issues, but the score came out higher than expected." Some teams had indicators with low scores but no major current issues, such as "We do not have a Scrum Master, but we are rotating Scrum events among developers," or "We do not have a product backlog, but we have a good relationship with the owner." 
Since each participant had different areas of expertise, we were able to encourage mutual learning by having participants teach each other about the indicators they were less familiar with. Many teams in Group A had well-organized backlogs, while many teams in Group B were struggling with theirs. Maybe we can exchange knowledge on organizing backlogs...👀

📒Reflection (4L Reflection)

We split into two groups for the reflection. In my previous job I had to try hard to get people to speak up, but at KINTO Technologies the board filled up in 5 to 8 minutes, giving me the impression that people actively share their opinions. The red sticky notes are impressions written after seeing other people's notes.

Group A Results

Group B Results

This time, we used Whiteboard, a new addition to Confluence recommended by Kin-chan. Sticky notes can be converted directly into JIRA tickets, which helps in organizing action items.

🚩Results of the Reflection

Here is some of the feedback from among the many voices. Many people expressed a desire to strengthen their relationships with product owners (POs) in order to improve their services and deliver faster. I also got the impression that many teams were highly self-organized.

Liked
- Visualization helped us understand the team's strengths and weaknesses
- We were able to see where we diverged from the ideal of Scrum
- That developers are able to work responsibly and autonomously (self-organized)

Lacked
- Product Owners are not present at, or not included in, many Scrum events
- Story Point (SP) setting and estimation are not done well
- Due to the growth in team members, some feel the need to split up the team

Learned
- I was able to learn about the different Agile initiatives and products in our company
- Sprint lengths can be set shorter or longer depending on each team's situation

Longed for (excerpt)

Although these are not action items, the teams were able to set themselves informal goals to keep growing.
- To revise the length of their Sprints
- To split teams into smaller ones
- To improve communication with POs

💭Thoughts

I was really surprised that people from different departments and offices, some of whom I had never met before, took part when I called for participants, even though it was only my second month at the company. I would like to thank everyone again for their cooperation. By bringing together the people who practice Agile in our company, I made the following discoveries and learned the following lessons as a facilitator.

- Scrum checklists can be used to quantitatively visualize a team's level of Scrum
- Listening to other teams at different stages of their Scrum journey can provide opportunities for improvement and give courage to our own activities
- Connecting Scrum Masters from different teams created opportunities to find like-minded people to consult on various issues
- We were able to find issues common across teams (such as the need to improve communication with POs and to split teams)

I could not participate as actively as I would have liked because I was focused on facilitation, but when I heard a participant say, "I was on the brink of giving up, but learning about everyone's activities encouraged me," I was almost moved to tears. When I faced Agile challenges earlier in my career, I found solutions and empathy by joining external study groups and reading relevant books. Being able to share these challenges within the company, and having someone to discuss them with, is always a great thing.

🏔Summary: Which Agile Milestone Are We at Now?
At KINTO Technologies, our development approach adapts to the nature of each project: Waterfall is more common for large-scale projects, and we use Agile for other project types. This time we tried to visualize the level of Agile within the company from the perspective of Scrum, and found that each team approaches Agile, and its own issues, in its own way. So... which Agile milestone are we at now? To this question, we found no clear answer! (Sorry!) However, I feel that by gathering with the other Scrum Masters, we went further down the Agile path together! ✨

What I Want to Do in the Future

I belong to a cross-sectional team called the Project Promotion Group. I know this is a bit presumptuous since I just joined the company, but I hope to use this as an opportunity to help promote cross-team development through initiatives such as Scrum of Scrums and reflection of reflections (meetings where team improvements are shared with other Scrum Masters). The Agile Samurai ends with the words, "It doesn't matter if it is Agile or not!" I would like to continue to kaizen as much as I can and keep climbing Mount Agile together with all of you. Be Agile! Thank you for reading this article.
Introduction

Hello! I am Uemura from KINTO Technologies' Development Support Division. As a corporate engineer, I am mainly responsible for mobile device management (MDM). We recently held a case study presentation & roundtable study session specializing in the field of corporate IT, under the title "KINTO Technologies MeetUp! - 4 cases to share for information systems by information systems." In this article, I will introduce the contents of the case study "Advancement of Windows Kitting Automation: Introducing Windows Autopilot," which was presented at the study session, along with some supplementary information.

What is Windows Autopilot?

Windows Autopilot is a way to register Windows devices in Intune. By pre-registering a device's hardware hash (HW hash), the device is automatically registered in Intune during setup. I would like to talk about how we introduced Windows Autopilot into the KINTO Technologies environment to make PC kitting more efficient.

How Windows Autopilot automates kitting

First, I will explain how kitting automation with Windows Autopilot works.

1. Before kitting, the vendor or administrator registers the PC's HW hash in Intune.
2. The user (or administrator) starts the PC and signs in. Because the HW hash was pre-registered, the PC is automatically registered in Intune.
3. By creating a dynamic group that includes PCs registered with Windows Autopilot, any PC registered in Intune is automatically enrolled in that dynamic group.
4. By assigning the dynamic group from step 3 to each configuration profile and app deployment setting, device control and app deployment are performed automatically.

Windows Autopilot itself is responsible for the device registration function, so it covers steps 1 and 2. Dynamic groups should be used so that profile control and app deployment for registered devices happen automatically in steps 3 and 4. In other words, kitting can be automated by configuring not only the Windows Autopilot registration settings but also the device control settings and app deployment to run automatically.

Introducing Windows Autopilot

It took us about a month and a half to introduce Windows Autopilot, including research and verification. In the KINTO Technologies environment, the HW hash registration of all PCs had already been completed, so we only had to do the following two things:

- Assign the Autopilot profile, which is the first thing executed during kitting, to the HW hashes that had already been registered
- Replace the static groups that had been used for kitting with dynamic groups

Autopilot Profile Configuration

Assigning the Autopilot profile to a HW hash determines that the corresponding PC is registered in Intune via the Windows Autopilot method. In the profile itself, you can configure whether to skip the selections usually made on the PC setup screens, such as "Language Settings" and the "Windows License Agreement."

Dynamic Group Configuration

Since Autopilot-registered devices carry an Autopilot device attribute, use this attribute in a dynamic membership rule; a sketch of such a rule follows below.

*For details, please refer to the following Microsoft site.
Create a device group for Windows Autopilot | Microsoft Learn

Then specify the dynamic group you created in the assignment settings of each configuration profile and app deployment. This makes it possible to automate the whole process from device registration to device control.
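For reference, here is a minimal sketch of such a dynamic membership rule. The rule below matches all Windows Autopilot devices via the `[ZTDId]` tag in the devicePhysicalIds device attribute; it follows the pattern documented on the Microsoft Learn page linked above, so please verify it against the current documentation for your own tenant.

```
(device.devicePhysicalIds -any (_ -contains "[ZTDId]"))
```

Microsoft also documents variants that scope the group more narrowly, for example by an Autopilot order ID or purchase order ID, if you need separate groups per device batch.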
This completes the kitting automation with Windows Autopilot.

Results of Introduction

How effective has the introduction of Autopilot been? As a quantitative result, we were able to reduce the number of work items by about 40% compared to before the introduction. On the other hand, we did not see as large a reduction in working hours, because installing apps and running Windows Update still takes a significant amount of time. As a qualitative result, automating and simplifying the kitting process has made human errors, such as work omissions, less likely.

Conclusion

Ideally, I would like to achieve so-called zero-touch kitting, but with the introduction of Autopilot alone, some manual work is still necessary. Even so, I think that being able to automate the series of processes from device registration to device control has greatly improved the efficiency of PC kitting. We will continue to incorporate new features in our ongoing efforts to improve efficiency further!