KINTO Technologies Tech Blog

Introduction

Hello! This is Wada (@cognac_n), a data scientist at KINTO Technologies. In January 2024, KINTO Technologies launched the "Generative AI Development Project team," and I was assigned to be a team member. This article serves as an introduction to our project.

What is Generative AI?

Literally, it refers to artificial intelligence that produces new data. The release of ChatGPT by OpenAI in November 2022 thrust it into the spotlight. AI has experienced a number of temporary booms (*1), but the fourth AI boom (*2), driven by the development of generative AI, has gone beyond a mere boom and is beginning to take root in our daily lives and work. I believe that generative AI, whose use will only become more widespread, will have an impact significant enough to overturn many of the conventional norms in our daily lives and work.

Past Initiatives

The project was launched in January 2024, but we had been working on the use of generative AI long before that. Here are just a few of our efforts:

- AI ChatBot developed in-house as a SlackBot for internal use (article in Japanese)
- External hands-on event on the topic of generative AI (held in Nagoya, Japan)
- Internal promotion of generative AI tools
- DX of customer center operations using generative AI
- Planning and development of new services using generative AI

And so on. However, there were many initiatives that unfortunately could not be undertaken due to our lack of resources... Now that the project has officially been established as a team in the organization, I believe we will be able to promote the use of generative AI even more broadly. I am very excited for what’s to come!

What Our Project Aims For

Our Mindset

What we value is "contributing to the company's business activities" through technology. Our goal is to solve internal issues with overwhelming "speed, quality, and quantity" as a "problem-solving organization." Instead of merely trying things out and critiquing, we will continue to work as an organization that focuses on value!

The Impact We Want To Have On Our Company

We aim to become a company where the use of generative AI is normalized by each and every employee! ...but how do we get there? Perhaps by:

- Recognizing which tasks are suited to generative AI and what can be entrusted to it
- Learning how to write basic prompts for each kind of task
- Creating a culture that accepts AI-generated output

Elements such as these could help. In the rapidly changing world of generative AI, what kind of shape should we aim for? I think we need to keep asking ourselves this question.

To Do So

The project currently divides its generative AI initiatives into three levels:

- Level 1: "Give it a try" first with the existing systems
- Level 2: Create more value with minimal development
- Level 3: Maximize the value added to the business

The following images show the level classification and how we proceed with an initiative.

Level classification of initiatives on generative AI
Estimating the value of initiatives while aiming for the appropriate level

This does not mean that all initiatives should aim for Level 3. If sufficient value can be created at Level 1, there may be no need to spend the cost and man-hours to take it to Level 2. The key is to try lots of ideas for quick wins at Level 1. For that purpose, it is ideal that all employees, including non-engineers, have a level of AI literacy high enough to carry out Level 1 on their own.
What We Want To Work On In The Future

From an Assisted Form of "Let’s Give It a Try"

It has been several months since the introduction of our in-house generative AI tools, but we still hear people say that they don't know what the tools can do or when to use them. First of all, as those with expertise in generative AI, we plan to increase the number of use cases where generative AI is applied, while providing careful support in identifying suitable tasks and writing effective prompts.

- At first, with careful support, we encourage giving ideas a try
- Increase the number of in-house use cases of generative AI
- Make in-house use of generative AI the norm

Towards an Autonomous Form of "Let’s Give It a Try"

If we continue with the above setup, our capacity will soon become a bottleneck, and problem-solving won't scale if we are constantly providing assistance. We would therefore like those responsible for operations to recognize tasks suitable for AI and entrust them to generative AI themselves, by "trying out" Level 1 with basic prompts.

- Enable operational teams to make use of Level 1 themselves
- We instead offer advice and consultancy for improving Level 1 ideas, or for taking them to Level 2

Training To Achieve These Goals

We will enhance in-house training to raise the level of AI literacy among employees. The goal is to foster a culture where many employees share a common understanding of generative AI, enabling smooth conversations about its use and acceptance of its output.

- Enhance in-house IT literacy training
- Tailor training according to job type and skill level
- Conduct training at a fine granularity, covering topics such as image generation, summarization, and translation
- Provide the training that is truly needed, based on trainees' feedback and with a quick turnaround time

Sharing Information

We share our initiatives across various media platforms, including this Tech Blog. We plan to release a variety of content, including technical reviews of generative AI and introductions to the project's initiatives. We hope you look forward to it!

Conclusion

Thank you for reading my article all the way to the end! It was a lot of abstract talk, but I hope it will be helpful to those who, like us, are seeking to leverage generative AI.

References

[*1] Ministry of Internal Affairs and Communications. "History of Artificial Intelligence (AI) Research". (Accessed 2024-01-16)
[*2] Nomura Research Institute. "Future landscapes changed by Generative AI". (Accessed 2024-01-16)
Introduction

Hello! This is Ren.M from the Project Promotion Group at KINTO Technologies. I usually work on frontend development for KINTO ONE (used cars). In this article, I would like to introduce type definitions, one of the fundamentals of TypeScript.

Who This Article Is For

- Those who want to learn about type definitions in TypeScript
- Those who know JavaScript and want to learn TypeScript next

What is TypeScript?

TypeScript is a language that extends JavaScript, so the same syntax as JavaScript can be used. Traditional JavaScript does not require data type declarations, which lets you write programs with a certain amount of freedom. However, as demands on program quality grow, type mismatches and similar problems need to be prevented. This is why TypeScript, which uses static typing, has come into use. Understanding type definitions lets you code smoothly and pass data around safely.

Differences from JavaScript

In JavaScript, you can assign values of different data types, as below:

```javascript
let value = 1;
value = "Hello";
```

In TypeScript, however, the behavior is as follows:

```typescript
let value = 1;
// Cannot assign: not a number
value = "Hello";
// Can assign: same number type
value = 2;
```

The Main Data Types

```typescript
// string type
const name: string = "Taro";
// number type
const age: number = 1;
// boolean type
const flg: boolean = true;
// array of strings
const array: string[] = ["apple", "banana", "grape"];
```

Explicitly declaring a type after the `:` is called a "type annotation."

Type Inference

TypeScript automatically assigns types even without the type annotations shown above. This is called type inference.

```typescript
let name = "Taro"; // string type
// Bad: cannot assign because name is a string
name = 1;
// Good: can assign because it is a string
name = "Ken";
```

Typing Arrays

```typescript
// An array that only allows numbers
const arrayA: number[] = [1, 2, 3];
// An array that only allows numbers or strings
const arrayB: (number | string)[] = [1, 2, "hoge"];
```

interface

You can use an interface to define the type of an object.

```typescript
interface PROFILE {
  name: string;
  age?: number;
}

const personA: PROFILE = {
  name: "Taro",
  age: 22,
};
```

By adding a "?" after a key, as with age above, you can make the property optional.

```typescript
// OK even without the 'age' property
const personB: PROFILE = {
  name: "Kenji",
};
```

Intersection Types

A combination of multiple types is called an intersection type. In the example below, STAFF is one.

```typescript
type PROFILE = {
  name: string;
  age: number;
};

type JOB = {
  office: string;
  category: string;
};

type STAFF = PROFILE & JOB;

const personA: STAFF = {
  name: "Jiro",
  age: 29,
  office: "Tokyo",
  category: "Engineer",
};
```

Union Types

Using | (pipe), you can define two or more possible types.

```typescript
let value: string | null = "text";
// Good
value = "kinto";
// Good
value = null;
// Bad
value = 1;
```

With arrays:

```typescript
let arrayUni: (number | null)[];
// Good
arrayUni = [1, 2, null];
// Bad
arrayUni = [1, 2, "kinto"];
```

Literal Types

You can also make the assignable values themselves explicit as a type.

```typescript
let fruits: "apple" | "banana" | "grape";
// Good
fruits = "apple";
// Bad
fruits = "melon";
```

typeof

Use typeof when you want to inherit the type of an already declared variable.

```typescript
let message: string = "Hello";
// Inherits message's string type
let newMessage: typeof message = "Hello World";
// Bad
newMessage = 1;
```

keyof

keyof turns the property names (keys) of an object type into a type.

```typescript
type KEYS = {
  first: string;
  second: string;
};

let value: keyof KEYS;
// Good
value = "first";
value = "second";
// Bad
value = "third";
```

enum

An enum (enumerated type) automatically assigns sequential numbers. In the example below, SOCCER is assigned 0 and BASEBALL is assigned 1. Using enums improves readability and makes code easier to maintain.

```typescript
enum SPORTS {
  SOCCER,
  BASEBALL,
}

interface STUDENT {
  name: string;
  club: SPORTS;
}

// club is assigned 1
const studentA: STUDENT = {
  name: "Ken",
  club: SPORTS.BASEBALL,
};
```

Generics

Generics let you declare the type each time you use something. They are useful when you would otherwise repeat similar code for different types. By convention, T is often used as the type parameter.

```typescript
interface GEN<T> {
  msg: T;
}

// Declare T's type when using it
const genA: GEN<string> = { msg: "Hello" };
const genB: GEN<number> = { msg: 2 };
// Bad
const genC: GEN<number> = { msg: "message" };
```

If you define a default type, declarations such as <string> become optional.

```typescript
interface GEN<T = string> {
  msg: T;
}

const genA: GEN = { msg: "Hello" };
```

Using extends together with generics restricts the types that can be used.

```typescript
interface GEN<T extends string | number> {
  msg: T;
}

// Good
const genA: GEN<string> = { msg: "Hello" };
// Good
const genB: GEN<number> = { msg: 2 };
// Bad
const genC: GEN<boolean> = { msg: true };
```

Using generics in a function:

```typescript
function func<T>(value: T) {
  return value;
}

func<string>("Hello");
// <number> can be omitted
func(1);
// Multiple types are also allowed
func<string | null>(null);
```
Using extends in a function:

```typescript
function func<T extends string>(value: T) {
  return value;
}

// Good
func<string>("Hello");
// Bad
func<number>(123);
```

Using generics together with an interface:

```typescript
interface Props {
  name: string;
}

function func<T extends Props>(value: T) {
  return value;
}

// Good
func({ name: "Taro" });
// Bad
func({ name: 123 });
```

Conclusion

How was it? In this article, I introduced some of the fundamentals of TypeScript. TypeScript is increasingly used in frontend development, and adopting it helps prevent data type mismatches, enabling safer development with fewer bugs. I hope this article has been helpful, even if only a little! There are many other articles on our Tech Blog, so please take a look!
Introduction

Hello. I'm Chris, and I do frontend development in the Global Development Division at KINTO Technologies. Today, I will talk about a somewhat common problem in frontend development and how to solve it!

The Problem

Sometimes you want an anchor tag (`<a>` tag) to scroll the user to a specific part of a page, like below. You can achieve this by giving an id to the element you want to scroll to and adding href="#{id}" to the `<a>` tag.

```html
<a href="#section-1">Section 1</a>
<a href="#section-2">Section 2</a>
<a href="#section-3">Section 3</a>

<section class="section" id="section-1">
  Section 1
</section>
<section class="section" id="section-2">
  Section 2
</section>
<section class="section" id="section-3">
  Section 3
</section>
```

This is useful for users when you have long pages such as articles and terms of service. However, there are often fixed elements at the top of a page, such as headers, which cause the scroll position to be slightly misaligned after clicking on a link. For example, suppose you have the following header.

```html
<style>
  header {
    position: fixed;
    top: 0;
    width: 100%;
    height: 80px;
    background-color: #989898;
    opacity: 0.8;
  }
</style>

<header>
  <a href="#section-1">......</a>
  <a href="#section-2">......</a>
  <a href="#section-3">......</a>
  ...
</header>
```

I intentionally made this header a little transparent. You can see that some of the content is hidden behind the header after the a-link is clicked.

How To Solve With Just HTML and CSS

You could solve this problem with JavaScript: when the a-link is clicked, get the height of the header and subtract it from the scroll position before scrolling. For this article, however, I want to show you a solution that uses only HTML and CSS. To be more specific, you prepare another <div> a little above the <section> you want to reach and make the user scroll to that element.

Going back to the previous example, we will first create a div tag in each section. Then assign a class to the div tag, such as anchor-offset, and move the id that was originally assigned to the <section> tag to the newly created div tag.

```html
<section>
  <div class="anchor-offset" id="section-1"></div>
  <h1>Section 1</h1>
  ...
</section>
```

Then use CSS to style the <section> tag and .anchor-offset.

```css
/* use classes if you want to add only the elements that need to be anchored */
section {
  position: relative;
}
.anchor-offset {
  position: absolute;
  height: 80px;
  top: -80px;
  visibility: hidden;
}
```

With the above settings, when the user clicks on the a-link, they will scroll to a point slightly above the corresponding <section> (80px above, in our example), offsetting the height of the header (80px).

How to Write It in Vue

Vue allows you to bind values to CSS. If you use this feature to set the height dynamically and turn it into a component, it will be easier to maintain.

```vue
<template>
  <div :id="props.target" class="anchor-offset"></div>
</template>

<script setup>
import { computed } from 'vue'

const props = defineProps({
  target: String,
  offset: Number,
})

const height = computed(() => {
  return `${props.offset}px`
})

const top = computed(() => {
  return `-${props.offset}px`
})
</script>

<style scoped lang="scss">
.anchor-offset {
  position: absolute;
  height: v-bind('height');
  top: v-bind('top');
  visibility: hidden;
}
</style>
```

Summary

This is how you can adjust the scroll position to account for fixed elements such as headers when the user scrolls to a specific part of the page with an `<a>` tag. Although there are many other solutions, I hope this one helps you!
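For reference, here is a minimal sketch of the JavaScript approach mentioned at the beginning. It is not part of the original article's code; it assumes a fixed `<header>` element and in-page anchor links like the ones above.

```javascript
// Minimal sketch of the JavaScript alternative: subtract the header
// height from the target's scroll position before scrolling.
document.querySelectorAll('a[href^="#"]').forEach((link) => {
  link.addEventListener('click', (event) => {
    const target = document.querySelector(link.getAttribute('href'));
    if (!target) return;
    event.preventDefault();
    const headerHeight = document.querySelector('header').offsetHeight;
    const top =
      target.getBoundingClientRect().top + window.scrollY - headerHeight;
    window.scrollTo({ top, behavior: 'smooth' });
  });
});
```

The HTML/CSS approach in the article avoids this script entirely, but the sketch shows what the offset calculation looks like if you do need JavaScript (for example, when the header height is not known in advance).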
Introduction & What’s the Story?

I am Yuki T. from the Global Development Division, responsible for the operation and maintenance of products for the global market. Our team members in the Global Development Division come from diverse nationalities and speak a variety of languages. My team in particular includes members who don’t speak Japanese as well as members who don’t speak English, so you could say we have made huge efforts (struggles?) to establish communication within the team. In this article, I would like to introduce these efforts and the insights gained in the process.

Conclusion

- If you can’t speak English, Japanese is okay.
- If you can’t speak, at least try writing. But if you do, write precisely.
- It’s hard, but it’s worth the effort.

Introduction (What Kind of Team Are We?)

Let me share a bit about my team (operations and maintenance) within the Global Development Division. We are 8 members. The member composition and the way we work are as follows.

Members
- A mix of full-time employees and outsourcing company members.
- The team has been around for about a year. At the beginning, we were all Japanese members, but foreign teammates joined after a while.

Work Style
- A hybrid of remote and in-office work.
- Agile development (Scrum). A mix of remote and in-office members join the various Scrum events.

Communication
- Slack for communication, Teams mostly for meetings.
- Atlassian Jira and Confluence for task and document management.

The language proficiency of the members varies, but the majority are Japanese (6 out of 8).

| Classification | Language proficiency | Number of people |
| --- | --- | --- |
| A | English only. Does not speak Japanese at all (foreign nationality) | 1 |
| B | Mainly English. Daily conversational level of Japanese (foreign nationality) | 1 |
| C | Mainly Japanese. Daily conversational level of English (Japanese nationality) | 2 |
| D | Mainly Japanese. Can’t speak English; can read and write a little (Japanese and foreign nationality) | 4 |

By the way, I (Japanese nationality) would be "C" above, with a TOEIC score of about 800. I can speak a little, but when it comes to complicated discussions, my lack of vocabulary is immediately apparent. And I’m pretty bad when it comes to listening.

Done - Learned - Next Step

When the team was first formed, it consisted mainly of "C" and "D" members (hereinafter "Japanese members"), and communication was mainly in Japanese.[^2] After English-speaking members at levels "A" and "B" (hereinafter "English members") joined the team, we tried various things. Here is a summary of the results, following the Done - Learned - Next Step retrospective method[^3]. I will divide the experience into three categories: 1. Contact method (Slack), 2. Meetings (Teams), and 3. Documents (Confluence and Jira).

[^2]: Since the entire Global Development Division comprises many members of foreign nationalities (approximately 50%), we operate in an environment where a lot of communication and documentation is in English.
[^3]: Done - Learned - Next step | Glossary | JMA Consultants Inc. https://www.jmac.co.jp/glossary/n-z/ywt.html

1. Contact Method (Slack)

Done

"Even if you don’t understand Japanese, you can read it by copying and pasting it into a translation tool, right?"

Learned

"Translation is bearable only at the beginning." It is quite annoying to translate by copy-pasting every single time (you’ll see if you try it yourself).
Even when you are mentioned, there are surprisingly many cases in which the message is not really related to you, leading to a loss of motivation for copy-and-paste translation. It feels like wasted effort. In many cases, Slack also requires reading an entire thread to grasp the meaning, not just a single message. This also seems to add to the difficulty of translation.

Next step

"Let’s write in both Japanese and English." Important messages were also written in English. The point is not to translate entire Japanese texts. I didn’t find any good examples to publish here, but for example: summarize the issue in simple English for easy understanding, and for the details, let readers translate the remaining text themselves or ask about it separately. It is difficult for the sender to translate everything.

2. Meetings (Teams)

Done

① "I’ll be speaking in Japanese, so use the Teams translation function to read the subtitles."
② "Even if you’re not good at English, try your best to speak in English!"
③ "OK, then I’ll translate everything!"

Learned

① "I don’t understand what it means even after reading." The conclusion is that the accuracy of machine translation between colloquial Japanese and English is still low. In particular, Japanese spoken in a casual meeting with a small number of people has various conditions that are adverse for machine translation, such as halting speech, ambiguous subjects and objects, and multiple people speaking at the same time.

② "No one is satisfied." Even when members made the effort to speak in broken English, neither the Japanese nor the English members could understand. Also, if you don’t know how to say something in English, you don’t speak in the first place, so everyone became quieter than when speaking in Japanese. The meetings ended quicker, but with little information gained.

③ "Never-ending meetings." Since I had to speak in English after each Japanese member spoke, meetings simply took twice as long. In addition, with my English being just a little better than daily conversational level, I often got stuck on how to translate, which extended the time even more. And while we were speaking in English, the Japanese members would just be waiting. As a result, meetings tended to get sloppy.

Next step

"If you are not good at English, you can use Japanese." I made it so that people who are not good at English could speak in Japanese. I then decided to pick out the content relevant to the English members and serve as the interpreter. This has helped keep meeting times as short as possible.

"If you can’t speak, you can at least write it down." If this were all, however, the amount of information conveyed to the English members would be reduced. So I asked everyone to write meeting notes in as much detail as possible. That way, even if you do not understand something on the spot, you can read it later using the browser’s translation function. Incidentally, because we write down the words as we hear them, the notes may be a mixture of Japanese and English.

"Still, effort is required." There are situations, like Sprint Retrospectives, where you have to convey the meaning in real time, not later. In such cases, I add translations on the spot, even if it takes time. For example (in blue):

![Example of Retrospective comment](/assets/blog/authors/yuki.t/image-sample-retro.png =428x)

In the case of the Sprint Retrospective, while everyone is verbally explaining their ideas on Keep or Problem, I make good use of the gaps in between to add translations.
3. Documents (Jira and Confluence)

Done

"I’ll write it in Japanese, so use your browser’s translation function to read it."

Learned

"Confluence is relatively OK, but Jira is a bit tough." Design documents and specifications, which live mainly on Confluence, are translated relatively well. Also, many of the documents of the Global Development Division are originally written in English, so there is no need to worry about those. However, the translation accuracy of the comments on Jira tickets was poor. The main reason seems to be that, unlike official documents, comments on tickets often omit the subject or object, as Japanese sentences commonly do. There are also personal notes left in Japanese that not even native Japanese speakers would understand, so in a way this is only natural in some cases.

Next step

"Write accurately and concisely." We tried to write without omitting the subject, predicate, or object. We also tried to write as concisely as possible (bullet points were recommended). This increased the accuracy of machine translation in browsers.

Gains

Thanks to these "next step" initiatives, communication within the team is now functioning to some extent. In addition, we found the following benefits.

More Information on Record

We all developed the habit of taking notes, even for small meetings. As a result, we have less trouble checking previous meetings and asking ourselves, "Do you remember what the conclusion was that time?"

Less Tacit Understanding

To translate into accurate English, it is necessary to make explicit the subject and object implied in the Japanese context. This gave us more opportunities to clearly define "who" will undertake a task and "what" is the target of a change. If you try it, you’ll realize how surprisingly often the "who" and "what" are not clearly defined in meetings. In such situations, you get more opportunities to check, "Was XX-san in charge of this?" This can also reduce the number of tasks left unaddressed. Moreover, I sometimes hesitated to ask things like, "I wonder if XX-san will do it, but I don’t feel comfortable asking..."; having the purpose of "translating into English" made it easier to clarify such questions.

More Diverse Opinions Can Be Expressed and Obtained

I feel that the reduction of tacit understanding and clearer communication has led to "being able to say what we want to say and express diverse opinions." In addition, we are now able to incorporate more opinions from the English members, which has given us perspectives that would be difficult to notice with Japanese members alone. For example, the following idea from a Try:

![Example of retrospective comment](/assets/blog/authors/yuki.t/image-sample-retro.png =428x)

This was a Try arising from a Problem that said, "I didn’t accurately write the background and purpose of the task in the ticket," a rather serious comment (sorry about that), as is common in Japan. In comparison, the English member’s suggestion to "approach it calmly" came from a completely different perspective, which made me think, "Hm, I see."

Summary

It takes a lot of effort to communicate when multiple languages are involved. However, I feel that these challenges not only affect immediate communication but also lead to new insights and more proactive opinions. "Diversity is a benefit and an asset, not an obligation or a cost." With this in mind, I am committed to continuing this effort.
I Worked as a Staff Member at try! Swift Tokyo 2024

With child-rearing settling down a bit, I had been thinking it was about time to get involved in activities outside the company, and just then try! Swift Tokyo 2024 was recruiting day-of staff, so I applied! I had actually never even attended as a participant, so I signed up without knowing what the venue atmosphere was like 😅 In this article, I'd like to write about my experience as a staff member.

What is try! Swift Tokyo 2024?

try! Swift Tokyo 2024 is a conference for iOS developers held in March 2024. Running since 2016, it is one of the largest conferences for iOS developers in Japan. It had been suspended for a long time due to COVID-19, but this year it was finally held again, for the first time in five years. See the official website for details. In my view, another well-known large iOS conference is iOSDC; whereas iOSDC builds its timetable from proposals gathered mainly within Japan, try! Swift also collects proposals from abroad and invites prominent engineers from overseas, so there were many situations where communication in English was necessary.

Staff Activities

This time, I worked as day-of staff. It was my first time working behind the scenes, and it was a very stimulating and fun experience. One week before the event, all the staff met face-to-face, and roles were assigned. I was assigned to the venue team, where I did the following:

- Preparing the venue
- Guiding participants
- Providing directions around the venue
- Handing out boxed lunches
- Collecting garbage
- Tearing down the venue
- Other miscellaneous tasks around the venue

![](/assets/blog/authors/HiroyaHinomori/IMG_2773.jpg =400x)

Since I usually spend my time writing code, I was worried whether my body would hold up for three days, but more than that, working physically and interacting with people felt refreshing. In particular, reception and guiding participants were fun because I could communicate with them directly. However, try! Swift has many speakers and participants from overseas, and communication in English was necessary, so I was painfully reminded of my lack of English skills. Since it was the first edition in five years, many of the staff, including me, were new members, and at first there was a lot of confusion, but by the end of the first day, we were all cooperating and enjoying the work.

![](/assets/blog/authors/HiroyaHinomori/IMG_2784.jpg =400x)

It was also nice to see, at teardown on the second day, that many people had signed the remaining sponsor board 👍 At the After Party following the teardown, participants and staff could have fun together, and I got a lot of stimulation from new encounters there as well. On the final, third day, workshops were held, and the participants were so enthusiastic that watching them boosted my motivation too 💪 I had a bit of free time, so I could exchange information with other staff members and had a good time. The churrasco at the closing party after the complete teardown was delicious, too 😋

Closing

![](/assets/blog/authors/HiroyaHinomori/IMG_2804.jpg =400x)

I wanted to take more photos, but I regret that I was so focused on the work that I hardly took any... By participating as a staff member, I gained new encounters and stimulation that I could never have gotten just by attending. It was a great experience. If I get the chance, I'd like to be a staff member again next time! I encourage everyone reading this article to try being a conference staff member too! Finally, I'd like to say THANK YOU to all the organizers, speakers, and other participants!!! See you again 👍
We Held an Internal LT Event!

Hello, I am Ryomm. I joined KINTO Technologies in October 2023 and am mainly on the iOS team developing an app called my route by KINTO. We held an internal Lightning Talk (LT) event at our Muromachi Office in Tokyo, and today I'm delighted to share the experience with you in this report!

Event Background

At a one-on-one meeting with my boss, we talked about how we would like to hold casual Lightning Talks, since we hadn't had the opportunity to speak in front of others recently. I learned that other offices were already doing something similar under the name of information-sharing meetings. So:

- (November 21) I posted on my times Slack channel that I wanted to hold this event, and the conversation progressed without a hitch. (Post on times)
- (November 27) A kick-off meeting was held by a group of volunteers. (On the committee channel)
- (November 29) An announcement was made at a meeting attended by all employees, informing them that the venue would be the Muromachi Office.

![Muromachi Office Channel](/assets/blog/authors/ryomm/2023-12-28-LightningTalks/03.png =400x)

A cute flyer was made!

- (December 14) The timetable was announced.

![Cute timetable](/assets/blog/authors/ryomm/2023-12-28-LightningTalks/05.png =300x)

A cute timetable was made!

- (December 21) The LT event was held!

The Lightning Talk event came together really fast, just a month after we first talked about wanting to do it! Thanks to the active participation of the Tech Blog team and many others, I think it was a very enjoyable meeting. In addition, with the help of the Corporate IT Group, we successfully ran a full Zoom live stream of the event! I was worried that we wouldn't be able to get enough speakers, but we ended up with 12 willing participants (some of whom even came all the way from Nagoya!). We started organizing it informally and just for fun, so I believe the LT event was only possible thanks to the collaboration of everyone involved.

Lightning Talks

The talks were casually organized in various ways. Since they were for internal use only, not all the content can be shared publicly, but here's a summary of a few.

Tag-Based Release with GitHub Flow + Release Please

⛩ (Torii)'s Lightning Talk. His LT explained GitHub Flow (including a comparison with Git Flow), tag-based releases, and Release Please, and how combining them helps automatically generate a CHANGELOG and simplify version control. It made me want to try Release Please, because it addresses a concern of mine: not wanting to release certain features yet while development of another version is still ongoing.

⛩'s LT

A Fan's Way to Enjoy Formula 1 (F1), the Pinnacle of Motorsports

mt_takao's Lightning Talk. It was an LT introducing the excitement of F1, one unique to an automotive company! He emphasized that in F1, even a one-second gap is huge, and the strategies for closing gaps of 1/1000th of a second are what make F1 intriguing! The last part of his talk shared information about the 2024 FIA F1 World Championship Series MSC CRUISES Japan Grand Prix, to be held at the Suzuka Circuit in Mie Prefecture from April 5 (Fri.) to 7 (Sun.), 2024! Wow! It was an LT that definitely made me want to attend a race in person!

mt_takao's LT

Toward the January Development Organization Headquarters Meeting

Aritome's Lightning Talk.
It was an LT about his career journey, the lessons he gained along the way, his thoughts on KINTO Technologies, and his strategies for working energetically and with vitality. I am also looking forward to the first large-scale in-house event in January!

Why Don't You Let People Know About KINTO Technologies?

HOKA's Lightning Talk. She talked about her own experience as a public relations professional and her efforts to raise awareness of KINTO Technologies, as well as her involvement in organizational human resources and recruitment. She also asked for cooperation in promoting KINTO Technologies. The easiest way to do so is reposting on X, so I did it right away. 😎✨

HOKA's LT

Sketchy Investment

Hasegawa's Lightning Talk. Studying English is a super-high-return investment! It was an LT promoting Hasegawa's style of studying and English learning! It concluded with a request for the company to consider introducing English learning assistance, which got the audience excited. It also made me think I should study English too. It was a very energetic LT!

Hasegawa's LT

Lowering the Bar for LT Speakers + Announcement of an Agile Meeting

Kinchan's Lightning Talk. He defined an LT as a place to convey what you like or what you find great, and said that by conveying "your attributes × your likes and specialties," you can create a fun LT with originality! It was an inspiring talk that motivated people to participate in LTs! Perhaps because of this LT, about 70% of participants expressed interest in speaking at the next LT in the post-event survey. It was also the hottest LT, receiving the most votes in the "Best LT" poll.

Kinchan's LT

The Impact of Tech Blog Posting on Your Career Three Years Later

Nakanishi's Lightning Talk. He talked about how continuing to write on the Tech Blog can improve your skills and even lead to book publications. He brought his actual publication: a book of 1,087 pages! I was surprised that consistent contributions to the Tech Blog eventually led to publishing such a substantial book. https://www.amazon.co.jp/dp/486354104X/

Let Bookings Do Your Troublesome Scheduling

A Lightning Talk by Gishin of Technology & Trust. This LT was about Microsoft Bookings, which makes it easy to create booking sites and handles Teams/Outlook integration, surveys, reminder emails, and scheduling! It was actually used for this year's medical checkup, and some participants responded with, "Ooh, you mean that one!" (It was before I joined the company, so I haven't personally seen it...) It was also interesting to hear that the Exchange-based service had caused some instability and unexpected situations. I thought I should use it next time we organize a drinking party!

Already in use!

An LT That Makes You Want to Go See Motorsports in Five Minutes

Cinnamoroll's Lightning Talk. He picked seven recommended motorsports events to watch in Japan and encouraged us to visit the circuits to see them in person! He emphasized that the sound of the engines, the realism and intensity, the actual course conditions, the exhibits at the venue, and getting there by car are all things unique to the circuits that should be experienced at least once. And going to the circuits by car is definitely recommended: if you like cars, you should also subscribe to KINTO and use the easy application app! Cinnamoroll did not forget to advertise our service too. LOL. He delivered his talk wearing the team uniform sponsored by KINTO, and it was a great LT that conveyed his passion!
Cinnamoroll's LT

I Made Dumb Apps and Stuffed the Chimney♪

Hasty Ryomm Santa Claus's Lightning Talk. My LT was about the Advent Calendar I posted for personal development. I shared my own experiences participating in the "Dumb App Hackathon" and "Dumb App Advent Calendar." After I gave this LT, some of you tried the Dumb App Advent Calendar and completed it in 3 days, which was great!

Ryomm's LT

Someone wrote an Advent Calendar entry after listening to my LT
https://speakerdeck.com/ktcryomm/kusoapurizuo-tuteyan-tu-jie-meta

Corporate IT: A New Journey!

Flo's Lightning Talk. This LT was about her career, her work in the Corporate IT Group, and her future prospects! She was amazing because she said her job is to ensure that KINTO Technologies' employees work happily. LOL

Conclusion

By limiting the event to in-house members, we were able to create an opportunity for casual public speaking, and I think it was an enjoyable event that also gave employees a chance to interact with each other. It must have been a great success as a launch event for the Muromachi information-sharing meetings! On a personal note, I had noticed that KINTO Technologies was relatively quiet on Slack, so I also created a live channel to encourage more active text communication, both online and offline. At the same time, the event was recorded so that the reactions to the LTs could be reviewed later, and the excitement of the event was made visible to draw the interest of non-participants toward future events. We gathered survey responses into spreadsheets using Slack Workflow, ensuring that participants could enjoy the event without having to leave Slack as much as possible.

As a point of reflection, I disabled the chat for the Zoom webinar used for streaming but left the reactions enabled, so a certain number of people could only watch passively. I would like to apply this lesson next time. As it was the holiday season, we prepared Christmas costumes and Chanmery (a non-alcoholic sparkling drink). A surprise discovery: planning food and drinks was much easier when matching them to the event being held! We have received requests from both participants and volunteers for a second LT event, so I would like to tie the next one in with some other event. We had a year-end party after the LT event, and it was great to hear people from other offices say that they had watched the stream. The presenters also said they were happy to receive feedback, so I am glad we held the event! It would be great if we could continue to set up these kinds of casual presentations and hear stories from different people in the company.
Hello! ( º∀º )/ This is Yukachi from the Budget Control Group and the Tech Blog project team. Today, December 24th, is the Arima Kinen! My fave is Sole Oriens 🏇 With all the horses at such a high level, I look forward to seeing what stories they bring to this year's race! How are you doing these days?

Back to the topic: today I'd like to write about our (self-proclaimed) U-35 members' meetup with our president.

Until the Meetup

I joined the company in December 2020, a full three years ago! I remembered the number of employees being small back then, so I looked up the headcount and age ranges. 20s: 5, 30s: 14, total: 58 employees (in the former development organization division of KINTO Co., Ltd., before KINTO Technologies Corporation was established). So few...! And as of now: 20s: 43, 30s: 131, total: 360 employees! What growth...! For the behind-the-scenes story of this growth, check out HOKA's article. ☟

Let's Make a Human Resources Group: How the Organization Rapidly Gained 300 Employees in 3 Years | KINTO Tech Blog

When we were fewer people, we used to have company-wide drinking parties, which fostered cross-department connections more often. But with the current headcount, we are limited to individual group gatherings, and the constant influx of new hires makes it challenging to keep track of who is who... However, Chimrin, known for coming up with great ideas, planned this event! She is always saying, "Let's try this!" and proposing various ideas.

"We don't have the opportunity to interact directly with the president!" "I want more involvement with other divisions!" In response to these voices: "Let's host an exchange event between the president and our younger employees!" This is how the meetup came about. We all worked together to give it shape. And so it happened!

This time, President Kotera shared with us various aspects of his life, including his school days, past career, current position, and future vision. This part lasted one hour, but many people said in the survey that they wanted to hear more. Kotera-san is indeed a good storyteller! Well, I took some good photos for this article, so let me set up a little photo corner here. Hopefully, it will help convey the atmosphere!

- Excellent moderator, thank you!
- Interaction between different divisions that are not usually involved with each other!
- New hires!
- Kotera-san was speaking so earnestly that I couldn't point the camera at him much.
- Triple Yuji
- People tend to take photos at the photo spot.
- Everyone pitched in and cleaned up together cheerfully.

Even after the meetup ended and Kotera-san left, the excitement lingered. A similar impromptu gathering formed around Manager Ueda, who had come to check on his team members because they hadn't come back. He began with, "I'm delighted this happened," and spoke passionately. When I asked the employees in their 20s who didn't make it this time for their reasons, they replied, "I thought it would be a more formal meeting," and "I was scared to talk to the president!" As you can see from the pictures, it was a relaxed gathering, and Kotera-san is super friendly and enjoys socializing with different people over a drink!

Selected Survey Comments!

- I learned that KINTO's services are consistently good and emphasize the importance of car subscription.
- I used to think it was difficult to create new services, but I learned the importance of persevering and connecting with people.
- Hearing about the origins of KINTO Technologies directly from Kotera-san, who was involved in its establishment, helped me understand it with a sense of reality.
- I felt that I got to know Kotera-san better through his stories, his way of speaking, and his facial expressions.
- I used to work thinking only about the future, but after listening to Kotera-san's way of working, I realized that it is more important to do my best in the present.
- It was an opportunity to get to know Kotera-san's personality as well as to get to know people around my age.
- It was nice to be able to interact with people with whom I have no involvement in my daily work.
- I was able to share similar concerns with people my age!

I'm glad to see that the survey results suggest a high level of satisfaction. Kotera-san also mentioned in his closing remarks, "Honestly, it was a lot of fun!" A second event is already scheduled to take place!

Conclusion

Connections with other divisions tend to be uneven, and it is difficult to get involved without an opportunity to do so. I am not an engineer, but I get lots of support from engineers my age. For example, Chimrin's article, ✏ A Story of Simplifying Book Management Methods, came from someone saying: "This looks like a lot of work to manage. I wonder if using Jira would reduce the burden on Yukachi." So it is a story of love and inspiration created by a group of dedicated volunteers! Let me boast about it here.

As Kotera-san mentioned during the meetup, human connections are vital in the workplace. The more people you know across departments, the easier it is to seek advice and support when you encounter challenges, and to collaborate to accomplish tasks effectively, which leads to a better work environment. I think it is good to have as many chances as possible to make peers this way, so I'd like to continue providing such opportunities together with everyone!

Well, thank you for reading my article all the way to the end! Merry Christmas!
Introduction

When I mention that I had no experience in the web industry before joining KINTO Technologies, I get looks of surprise: puzzled faces wondering how someone with such a career background could (and did manage to) join the company. What’s more, this guy is in his 40s, unlike the young people here in their 20s!

Walking in a Different World

Originally, I worked as a programmer developing embedded software for home appliances. From there, I moved on to control software for automotive ECUs. In my previous job, I was a project manager at a European company, introducing test equipment for engines. The software was just one part of an entire system comprising mechanical, electrical, measurement, fluidics, simulation, and other elements. From a world of neutrally grounded three-phase four-wire distribution systems, flanged pipe connections, pressure drop and torsional vibration analysis of heat exchangers, and CFD, to a world of modern cloud architecture: it has been a little more than three years since I jumped into a completely different world. That is why today I’d like to look back on the journey I’ve taken.

![a person in his 40s with no web experience](/assets/blog/authors/hamatani/20231223/40s_beginner.jpg =480x)

A 40-something with no web experience, as depicted by generative AI

Work at the Production Group

I was assigned to the Production Group, not to an engineering position where I would actually be coding web systems. Internally, the Production Group is called "Pro G." Currently four of us are working, with one team member on childcare leave. We are the smallest group in KINTO Technologies.

Boundary Spanner

When it comes to the role of the Production Group, the closest description I can think of is that we are boundary spanners who connect people and organizations. Our job involves collaborating with members of the business department to identify the kind of system needed to achieve KINTO’s goals, and connecting them with the system development department. Among these tasks, I am mainly in charge of conducting "system concept studies" in the most upstream phase of large and medium-sized projects.

![Production Group](/assets/blog/authors/hamatani/20231223/produce_group.jpg =480x)

The Production Group connecting business and systems (they all look so young)

Do Not Overengineer

So what kind of systems should be developed? Meeting business needs is a prerequisite, but I do not think that is enough. It is crucial not to overengineer. Initial requests for systemization tend to be rich and packed with content. From those, we try to understand the core requirements of the business side, identify the functions that are really needed, and reduce them to a development scope that is necessary and sufficient. Most of the businesses that KINTO handles are unprecedented. Even if you imagine a system in your mind beforehand, it may not be all that useful in practice. There are things you can only find out by doing. I think it is best to start small with the minimum necessary system, then grow it step by step as the business grows. Also, since the in-house development teams are a valuable asset of KINTO, their resources must be utilized without waste and allocated to each project efficiently according to priority and target dates.
By keeping the development scope compact, we can develop as quickly as possible and bring products to the market as soon as possible. This sense of speed is part of KINTO’s culture and one of its strengths.

Colleagues in charge of the business side may be unfamiliar with writing requests to the system development side. Although they may have used systems before, many of them have never been involved in creating one from scratch, and the situation may be completely new to them. It is often said that developing a system from scratch is similar to building a house. But this is a house whose owner does not yet know exactly what he wants, as it is his first time building one. The Production Group also plays a role in guiding such team members. Because we are an in-house development organization, we can work with, and sometimes lead, the business department on an equal footing, proposing a balanced system that is not overengineered. This awareness has spread throughout KINTO Technologies, but I think the mission of the Production Group is at the core of it.

A Hunting Tribe

The Production Group is basically like a hunting tribe, where each member goes out to find their own prey (projects). At group meetings, everyone’s eyes light up when they hear about a new large-scale project coming up. Then, just like Neanderthals discovering a mammoth on the other side of the mountain, we quickly get ready to embark on the hunt. The first brave person to jump on the prey becomes the main person in charge of the case. This has become customary practice. Note: assignments may also be made based on location and areas of expertise.

Use Half, Let Half Go

I think it is best to travel as light as possible when venturing into a new world. Don’t lean fully on your previous experience. When you start afresh in a new place, you may think, "I should make the most of my previous experience." That’s because it’s always a bit scary to just jump in unarmed, and you’re expected to be ready to fight immediately. I feel that the only thing that alleviates the anxiety of a new environment is past experience. However, if you’re overloaded with your past experiences and values, you leave little room to assimilate new things. Past achievements can no longer be changed, so clinging to them is like having roots growing out of your feet. I failed because of that the first time I changed jobs.

It Will Be Helpful Even If You Forget It

So, let go of half. Things like persistence in your old ways or personal style are the first to go. Isn’t making the most of half and letting go of the other half about the right balance? "Letting go" means "leaving or forgetting for now," not "losing or denying." Even if you don’t usually think about those experiences, the drawer where you left them will pop open and help you when you need it. No one can steal that from you, so you don’t have to keep holding onto it tightly. Thinking this way makes things easier for me.

![Experience Stock](/assets/blog/authors/hamatani/20231223/stock_of_experience.jpg =480x)

Sorting through your own stock of experience

Feelings of Project Managers

In my generation, there was a saying that rabbits die of loneliness; project managers are just as vulnerable to loneliness. I was a project manager at my previous job, and I know that when a project is going well, things are fine, but when it isn’t going well, the isolation accelerates.
Members, customers, owners, supervisors, the finance department, the sales department, the service department, the home-country office, subcontractors, partners, and, as some even mentioned, family: pressure comes from all sides. Before you know it, there’s nowhere to run. Everyone at KINTO is kind, so I don’t think it goes that far here, but being a project manager can still be lonely and challenging. So I would like to support other project managers as well.

In most cases, projects are delayed because the baseline gets messy (e.g., scope, schedule, completion conditions). The best thing I can do as someone from Pro G is to ensure that a project is in good shape before we hand it over.

Things Will Work Out in the End

In my previous six years alone, I worked on nearly 40 projects. That’s a lot of projects. There were times when things went wrong before succeeding, budgets were exceeded, and moments when the future looked bleak, but all of them eventually worked out somehow. Even if you can’t reach the goal looking as cool as you imagined and instead roll across the line breathless, a goal is still a goal. Even when everything seems to be at a dead end, I believe there is always a way out somewhere. It is a wonder to me that I can think this way after so many painful experiences.

Diversity Is the Norm

I’ve worked with many different people. Nationalities varied: Germany, Austria, France, Sweden, the Czech Republic, England, India, Sri Lanka, Malaysia, Thailand, Indonesia, Singapore, Taiwan, and South Korea, among others (for some reason, I never had the opportunity to work with people from the United States or China). As for occupations, I’ve worked with people in software development, mechanical design, electrical work, plumbing, delivery and installation, sales, finance, procurement, warehousing, general contracting, fire departments, automotive design and development, industrial robotics, and even competitors we collaborated with. It is natural for a variety of people to participate in a project, and it is precisely because they are different that their participation is worthwhile. KINTO also has people with various occupations and habits, which is interesting to see.

Making Decisions Is a Project Manager’s Job

My former supervisor said, "The job of a project manager is to decide," and I thought that was true. Being blamed for not making a decision is more serious than being blamed for a mistake in judgment; not deciding means not doing a project manager’s job. You may seek advice from your supervisor, but never delegate your decision-making. That would be like sitting in the driver’s seat without steering. The moment you leave a decision to someone else is the moment you should leave the driver’s seat. It was a global company, but I think it was like that for project managers in every country.

Having spent my career with such values, I was surprised: project managers at KINTO/KINTO Technologies do not make decisions on their own, but with the agreement of all parties involved. I was quite puzzled by this difference, but in time I came to realize that this is another valid way to conduct project management. If my previous job was a more direct management style of "Decision and Order," KINTO is more on the indirect management side, which could be called "Facilitation and Agreement." I feel a sense of respect for my teammates, as what KINTO is doing is quite advanced. It is simpler and easier to decide everything yourself.
However, the approach of "advancing projects through consensus" can suffer from a lack of speed, just as entrusting direct "decisions and orders" to an individual carries its own risks. I wonder if it is somehow possible to have the best of both worlds.

Phone Calls

My flip phone was one of my most-used tools at work: calling people as soon as ideas came to mind. With so many outgoing and incoming calls, a daily history of dozens of calls was common. Since project members were scattered all over the country, the only way to reach them immediately was by phone. I didn’t even mind calling in the middle of the night. Of course, this is not the case at KINTO Technologies, where communication happens mostly through Slack. When I was surprised by such a natural thing, it felt as if I had slipped through time from the past. However, something that takes one minute in a phone call can sometimes take 30 minutes over Slack, so it is necessary to discern when to use each. I’m sure you are all practicing this already.

Embedded Software Knowledge

I was deeply attached to it, but as expected, I had to let my preconceptions go. Since this is a web system, there are no real-time requirements (there are demands on response speed, but that is different), and it does not operate on Event-Mode-Action state machines. It is also different from a continuous control system like PID. It is essentially hardware-independent, so resource constraints are limited. Therefore, sad as it was, I had to put my knowledge away in the back of a drawer. AWS SQS reminds me of the hand-made FIFO ring buffers of those days, and it makes me nostalgic. Even so, once there is a point of contact between the edge domain, such as software-defined vehicles (SDV) and IoT, and KINTO’s cloud, that knowledge may come into play again. So my drawer is buzzing with hope.

Get the Overall Gist

Because it is a different world, you encounter unknown things just by walking around.

Understand the Alpaca

When you first see an alpaca, you may think it looks like a sheep with a long neck, or perhaps like a white-haired camel. It is actually written in Chinese characters as "羊駱駝 (sheep camel)," so I think both points of view are valid. We have no choice but to honestly follow our background and intuition. It is impossible to face an alpaca from the start and understand it from scratch. So I think it is okay to understand something roughly at first, as in, "It’s like the XXX I know."

![Observing alpacas](/assets/blog/authors/hamatani/20231223/alpaca_watchers.jpg =480x)

You don’t need to observe that hard!

It’s Different, but It’s Almost the Same

Rather than focusing on the differences, focusing on the similarities will help you get used to the other world. I often rely on my experience in embedded software and engine test equipment to form rough understandings. However, this is not about pretending to understand (deceiving others), but about feeling like you understand (holding off on further investigation for now). When you actually have to work with the knowledge, you will be forced to face what you "don’t understand." In other words, it is probably safe to leave things in the shallow end until then.

The Whole Picture Rather Than Correctness

I find it more fitting to grasp the whole picture in a shallow and broad sense, even if it means making a few assumptions or leaving some parts undigested, rather than carefully and correctly understanding one thing at a time by scrutinizing the minutiae.
Even if it is just a collection of dots, you can somehow sense a hidden story as you look at it (like a constellation?). Or it can be a clue for getting to the place you want to climb (like bouldering?). In particular, I think the roles of project manager and boundary spanner benefit from that kind of holistic understanding.

To the Point Where You Can Dive

What can be read from the whole picture is only a hypothesis: "Assuming it’s XXX, we can proceed this way." Once a hypothesis comes to mind, dig deeper if necessary. I have no desire to become a professional in that area (I can’t), but it is enough to be able to talk with professionals. In my case, that is how I get things done. A byproduct of diving deeper is that knowledge that was previously just dots becomes connected and forms lines. In this way, we connect the lines little by little to make a map.

How to Spend the Rebellion Phase

Discomfort and Irritation

I think everyone is humble until they get used to a new job, spending their time listening to their surroundings a little timidly. As you get used to the work, though, things gradually spring up: "Huh? Isn’t the way this company works strange?" This is a feeling of discomfort that arises from the gap between past and present experiences. It is a very valuable realization in itself, and if handled well, it may lead to improvements. But, maybe it’s just me, there is also frustration involved: feeling irritated somehow.

Post-Transition Rebellion

I personally call this feeling after changing jobs the "post-transition rebellion." It begins as early as around 3 months in and lasts until about the second year. I’ve changed jobs three times, and it always comes. Even if the feeling of discomfort itself is fine, you can’t scatter irritation around you. In my case, I used one-on-one meetings to resolve it. I talked with my direct supervisor, my superior two ranks above, and my superior three ranks above. To be heard, I had to verbalize my discomfort, and in that process I became more objective and could calm down a little. It’s even worth getting a little fancy and organizing it like a proposal. Talking frankly in this way makes the sense of discomfort gradually disappear. For a sense of the distance involved: two ranks above me is the vice president, and three ranks above is the president. Flatness is part of the appeal of KINTO Technologies. If you sort out your sense of discomfort, some of it can even be folded into your mission.

![Adult’s rebellion](/assets/blog/authors/hamatani/20231223/rebellious_adult.jpg =480x)

AI-generated "Adult’s Rebellion" (the same person from the first image appears again!)

Conclusion

Perhaps now is the best time to be proactive about new technology and knowledge.

When the Old Drawer Opens

It is interesting to listen to people in various positions within the company, as well as to attend external events and talk with people from other companies. As I am stimulated and reconstruct my thinking, my rusty old drawer slides open, allowing for a potential chemical reaction between new and old ideas. I mentioned bouldering as an example: as you climb just a little, the view changes and you start to realize things, like what you’ll be able to reach next, or whether you should reconsider because things are different from what you thought. If I accept such changes honestly, I can enjoy the next change. Conversely, when I felt distressed, it was usually because I was trying to stay where I was.
That's how we survive

"I used to use a Tiger hand-cranked calculator." "I can read punch cards." When I got my first job as a new graduate, engineers with that experience were still active. Both "cloud-native modern applications" and "LLMs/generative AI" will eventually take their place in computer history as dead technologies. Technology will change, and the roles required will change too. I can't imagine what I will be doing then, but I hope I will be able to survive tenaciously, replacing half of myself as I go along.

![Computer History](/assets/blog/authors/hamatani/20231223/history_of_computer.jpg =480x) Former state-of-the-art technology and apples lined up
Introduction

Hello, this is Rasel from the Mobile Application Development group of KINTO Technologies. Currently, I am working on the my route Android application. my route is a multimodal outing application that helps you gather information about places to visit, explore locations on the map, buy digital tickets, make reservations, pay for rides, and so on.

As you already know, mobile applications have become an essential part of our daily lives. Developers primarily create applications targeting Android and iOS separately, incurring double the cost for the two platforms. To reduce those development costs, various cross-platform application development frameworks such as React Native and Flutter have emerged. But there are constant complaints about the performance of these cross-platform apps: they don't perform like natively developed apps. There are also recurring issues, and we sometimes have to wait a long time for framework developers to support platform-specific features newly released by Android and iOS.

Here Kotlin Multiplatform (KMP) comes to the rescue, offering native-like performance along with the freedom to choose how much code to share between platforms. In KMP, the Android application is fully native, since it is developed in Kotlin, Android's native-first language, so there are almost no performance issues. The iOS part uses Kotlin/Native, which offers performance closer to natively developed apps than any other framework. Today, in this article, we are going to show you how to integrate SwiftUI code with Compose Multiplatform in KMP.

KMP (also known as KMM for mobile platforms) gives you the freedom to choose how much code you want to share between platforms and how much you want to implement natively, and it integrates seamlessly with platform code. Previously only business logic could be shared between platforms, but now you can share UI code too! Sharing UI code became possible with Compose Multiplatform. You can read our previous articles on this topic below to better understand the use of Kotlin Multiplatform and Compose Multiplatform in mobile application development:

Kotlin Multiplatform Mobile (KMM)を使ったモバイルアプリ開発
Kotlin Multiplatform Mobile(KMM)およびCompose Multiplatformを使用したモバイルアプリケーションの開発

So, let's get started~

Overview

To demonstrate SwiftUI integration into Compose Multiplatform, we will use a very simple Gemini chat application. We will develop the app with KMP, using Compose Multiplatform for UI development, and Google's Gemini Pro API for replying to the user's queries in chat. For demonstration purposes, and to keep things simple, we are going to use the free version of the API, so only text messages are allowed.

How Compose and SwiftUI work together

First things first: let's create a KMP project using JetBrains' Kotlin Multiplatform Wizard, which comes with the necessary basic setup of KMP with Compose Multiplatform and some initial SwiftUI code.

![Kotlin Multiplatform Wizard](/assets/blog/authors/ahsan_rasel/kmp_wizard.png =450x)

You can also create the project in the Android Studio IDE by installing the Kotlin Multiplatform Mobile plugin. We will try to demonstrate how Compose and SwiftUI work together. To incorporate our composable code into iOS, we have to wrap it inside ComposeUIViewController, which returns a UIViewController from UIKit and can contain Compose code as its content parameter.
For example:

```kotlin
// MainViewController.kt
fun ComposeEntryPoint(): UIViewController {
    return ComposeUIViewController {
        Column(
            modifier = Modifier.fillMaxSize(),
            horizontalAlignment = Alignment.CenterHorizontally,
            verticalArrangement = Arrangement.Center
        ) {
            Text(text = "Hello from Compose")
        }
    }
}
```

Then we will call this function from the iOS side. For that, we need a structure that represents the Compose code in SwiftUI. The following code converts the UIViewController from our shared module into a SwiftUI view:

```swift
// ComposeViewControllerRepresentable.swift
struct ComposeViewControllerRepresentable: UIViewControllerRepresentable {
    func updateUIViewController(_ uiViewController: UIViewControllerType, context: Context) {}

    func makeUIViewController(context: Context) -> some UIViewController {
        return MainViewControllerKt.ComposeEntryPoint()
    }
}
```

Here, take a closer look at the name MainViewControllerKt.ComposeEntryPoint(). This is the code generated from Kotlin, so it may differ according to your file name and the code inside your shared module. For example, if the file in your shared module is named Main.ios.kt and your function returning a UIViewController is ComposeEntryPoint(), then you have to call it as Main_iosKt.ComposeEntryPoint(). So it will differ according to your code.

Now we instantiate this ComposeViewControllerRepresentable inside our ContentView() code and we are good to go:

```swift
// ContentView.swift
struct ContentView: View {
    var body: some View {
        ComposeViewControllerRepresentable()
            .ignoresSafeArea(.all)
    }
}
```

As you can see in the code, you can use this Compose code anywhere inside SwiftUI and control its size as you want from within SwiftUI. The UI will look like this:

![Hello from Swift](/assets/blog/authors/ahsan_rasel/swiftui_compose_1.png =250x)

If you want to integrate SwiftUI code inside Compose, you have to wrap it in a UIView. Since you can't write SwiftUI code directly in Kotlin, you write it in Swift and pass it to a Kotlin function. To implement this, let's add an argument (a factory returning a UIView) to our ComposeEntryPoint() function.
```kotlin
// MainViewController.kt
fun ComposeEntryPoint(createUIView: () -> UIView): UIViewController {
    return ComposeUIViewController {
        Column(
            modifier = Modifier.fillMaxSize(),
            horizontalAlignment = Alignment.CenterHorizontally,
            verticalArrangement = Arrangement.Center
        ) {
            UIKitView(
                factory = createUIView,
                modifier = Modifier.fillMaxWidth().height(500.dp),
            )
        }
    }
}
```

And pass createUIView from our Swift code as below:

```swift
// ComposeViewControllerRepresentable.swift
struct ComposeViewControllerRepresentable: UIViewControllerRepresentable {
    func updateUIViewController(_ uiViewController: UIViewControllerType, context: Context) {}

    func makeUIViewController(context: Context) -> some UIViewController {
        return MainViewControllerKt.ComposeEntryPoint(createUIView: { () -> UIView in
            UIView()
        })
    }
}
```

Now, if you want to add other views, create a parent wrapper UIView like below:

```swift
// ComposeViewControllerRepresentable.swift
private class SwiftUIInUIView<Content: View>: UIView {
    init(content: Content) {
        super.init(frame: CGRect())
        let hostingController = UIHostingController(rootView: content)
        hostingController.view.translatesAutoresizingMaskIntoConstraints = false
        addSubview(hostingController.view)
        NSLayoutConstraint.activate([
            hostingController.view.topAnchor.constraint(equalTo: topAnchor),
            hostingController.view.leadingAnchor.constraint(equalTo: leadingAnchor),
            hostingController.view.trailingAnchor.constraint(equalTo: trailingAnchor),
            hostingController.view.bottomAnchor.constraint(equalTo: bottomAnchor)
        ])
    }

    required init?(coder: NSCoder) {
        fatalError("init(coder:) has not been implemented")
    }
}
```

Then add it to your ComposeViewControllerRepresentable and add views according to your needs:

```swift
// ComposeViewControllerRepresentable.swift
func makeUIViewController(context: Context) -> some UIViewController {
    return MainViewControllerKt.ComposeEntryPoint(createUIView: { () -> UIView in
        SwiftUIInUIView(content: VStack {
            Text("Hello from SwiftUI")
            Image(systemName: "moon.stars")
                .resizable()
                .frame(width: 200, height: 200)
        })
    })
}
```

The output will look like this:

![Hello from Swift with Image](/assets/blog/authors/ahsan_rasel/swiftui_compose_2.png =250x)

In this way, you can add as much SwiftUI code as you want to your shared composable code. And if you want to integrate UIKit code inside Compose, you don't have to write any intermediate code yourself. You can use the UIKitView() composable function offered by Compose Multiplatform and add your UIKit code inside it directly:

```kotlin
// MainViewController.kt
UIKitView(
    modifier = Modifier.fillMaxWidth().height(350.dp),
    factory = { MKMapView() }
)
```

This code integrates the iOS-native map screen inside Compose.

Implementation of the Gemini Chat app

Now, let's integrate our Compose code inside SwiftUI and proceed with the implementation of the Gemini chat app. We will implement a basic chat UI using LazyColumn from Jetpack Compose. As our main focus is integrating SwiftUI inside Compose Multiplatform, we will skip the implementation details of the other parts of the application, such as the Compose part and the data and logic parts. We are using the Ktor networking library to call the Gemini Pro API; to learn more about the Ktor implementation, visit the Creating a cross-platform mobile application page. In this project, we implement our full UI with Compose Multiplatform and use SwiftUI just for the input field of the iOS app, because the TextField of Compose Multiplatform has a performance glitch on the iOS side. Let's put our Compose code inside the ComposeEntryPoint() function.
This code contains the chat UI with a TopAppBar and the list of messages. It also conditionally shows the input field, which will be used for the Android app (a sketch of the Android side appears at the end of this article):

```kotlin
// MainViewController.kt
fun ComposeEntryPoint(): UIViewController = ComposeUIViewController {
    Column(
        Modifier
            .fillMaxSize()
            .windowInsetsPadding(WindowInsets.systemBars),
        horizontalAlignment = Alignment.CenterHorizontally
    ) {
        ChatApp(displayTextField = false)
    }
}
```

We passed false to displayTextField so that the Compose input field will not be active in the iOS version of the app. The value of displayTextField will be true when we call this ChatApp() composable function from the Android implementation side, since TextField has no performance issue on Android (it is a native UI component there).

Now let's move to our Swift code and implement an input field with SwiftUI:

```swift
// TextInputView.swift
struct TextInputView: View {
    @Binding var inputText: String
    @FocusState private var isFocused: Bool

    var body: some View {
        VStack {
            Spacer()
            HStack {
                TextField("Type message...", text: $inputText, axis: .vertical)
                    .focused($isFocused)
                    .lineLimit(3)
                if (!inputText.isEmpty) {
                    Button {
                        sendMessage(inputText)
                        isFocused = false
                        inputText = ""
                    } label: {
                        Image(systemName: "arrow.up.circle.fill")
                            .tint(Color(red: 0.671, green: 0.365, blue: 0.792))
                    }
                }
            }
            .padding(15)
            .background(RoundedRectangle(cornerRadius: 200).fill(.white).opacity(0.95))
            .padding(15)
        }
    }
}
```

Then we return to our ContentView structure and modify it as below:

```swift
// ContentView.swift
struct ContentView: View {
    @State private var inputText = ""

    var body: some View {
        ZStack {
            Color("TopGradient")
                .ignoresSafeArea()
            ComposeViewControllerRepresentable()
            TextInputView(inputText: $inputText)
        }
        .onTapGesture {
            // Hide keyboard on tap outside of TextField
            UIApplication.shared.sendAction(#selector(UIResponder.resignFirstResponder), to: nil, from: nil, for: nil)
        }
    }
}
```

Here, we added a ZStack containing our TopGradient color with the ignoresSafeArea() modifier so that the status bar color also matches the rest of our UI. Then we added our shared Compose code wrapper, ComposeViewControllerRepresentable, which implements our main chat UI. Finally, we added our SwiftUI view, TextInputView(), which gives users of the iOS app smooth input performance through iOS-native code. The final UI will look like this:

![Gemini Chat iOS](/assets/blog/authors/ahsan_rasel/swiftui_compose_ios.png =300x) ![Gemini Chat Android](/assets/blog/authors/ahsan_rasel/swiftui_compose_android.png =300x)

Here, the whole UI code of this chat app is shared between Android and iOS with Compose Multiplatform of KMP, and only the input field for iOS is integrated natively with SwiftUI. The complete source code for this project is available on GitHub as a public repository. GitHub Repository: SwiftUI in Compose Multiplatform of KMP
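For reference, the Android side can use the same shared composable directly as the activity content. The following is a minimal illustrative sketch, not necessarily the actual entry point in the repository; it assumes the shared module exposes the ChatApp() composable shown above:

```kotlin
// MainActivity.kt (Android side) -- illustrative sketch; the project's real
// entry point may differ. Assumes the shared module exposes ChatApp().
import android.os.Bundle
import androidx.activity.ComponentActivity
import androidx.activity.compose.setContent

class MainActivity : ComponentActivity() {
    override fun onCreate(savedInstanceState: Bundle?) {
        super.onCreate(savedInstanceState)
        setContent {
            // Compose TextField performs well natively on Android,
            // so the shared input field is enabled here.
            ChatApp(displayTextField = true)
        }
    }
}
```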
Conclusion

In this way, we can overcome the performance issues of cross-platform apps with Kotlin Multiplatform and Compose Multiplatform while giving users a native-like look and feel. We can also reduce development cost, since we can share as much code between platforms as we want. Compose Multiplatform also enables sharing code with desktop applications, so a single codebase can serve mobile platforms as well as desktop apps. Additionally, web support is in progress, which will give you even more opportunities to share your codebase across platforms.

Another big advantage of Kotlin Multiplatform (KMP) is that you can always opt out to native development without wasting your code. You can use KMP code as-is in your Android application, since it is native for Android, and develop the iOS app separately. Reusing the SwiftUI code you have already written within KMP is also possible. This framework not only gives you high-performance applications, but also the freedom to choose what percentage of code to share, and to opt out into native development anytime you want. That's all for today. Stay tuned to the KINTO Technologies Tech Blog for more exciting articles. Happy Coding!
Introduction

Hello, this is Adegawa from the New Vehicle Subscription Development Group at KINTO Technologies. As an application engineer, I have been in charge of the development, maintenance, and operation of KINTO ONE, a new-vehicle subscription service for Toyota cars in Japan, for three years. I love cars, so I am very happy that my work lets me be part of this. At the moment I am prioritizing my family, so I drive a minivan, but I would like to drive a Toyota GR someday! My favorite engine mechanism is the cam gear train! Today, let me share a couple of projects that I found particularly fulfilling.

Preparing for the launch of the new Prius

January 10, 2023 marked the launch of the new (5th-generation) Prius. The Prius is a model that attracts attention even within Toyota, having established itself as an eco-friendly car at the forefront of hybrid vehicles. However, at the time the project started, the previous-generation Prius was not especially popular among the KINTO ONE models, partly because it had been on the market for a long time. So, personally, I did not expect a particularly large response even when the new model came out. But the goal was to withstand 990 PV/min — ten times the highest load we had ever seen! At that time, we could handle only about 3% of that target. We took three main countermeasures.

Countermeasure 1: adjusting the database connection endpoints

The RDB was configured with a writer instance and a reader instance. Because read access from the front end was overwhelmingly dominant, the load was concentrated on the reader instance, while the writer had headroom. As a tentative measure, we changed the front end to connect to the writer instance for both reads and writes. As a result, the load was balanced as intended. Since this was easier than adding a reader instance, I adopted it first.

Countermeasure 2: improving the quotation screen

The most visited screen after the top screen is the quotation screen for each car model. Here, users select the car grade, package, options, and so on, and we had been fetching the vehicle master information from a relational database table. We optimized this to retrieve the vehicle master information from an in-memory key/value cache (Redis on ElastiCache), reducing DB load and improving front-end response. Next, we used the information that could not be cached to identify the remaining high DB load. From quotation to application, we consult a master table containing dealer store information along with vehicle details. There were no slow queries as such, but the sheer volume of transactions added up. To address this, I added an index to the specific high-cost table, which improved query performance significantly, reducing the query cost to single digits.

Countermeasure 3: infrastructure enhancement

In addition to the above measures, we reviewed the infrastructure configuration so that it could withstand the target values together with countermeasures 1 and 2. As a special temporary response, we scaled the infrastructure manually as follows.
Scale out

| AWS service | Quantity (before) | Quantity (after) |
|---|---|---|
| EC2 Front End | 6 | 36 |
| EC2 Back End | 2 | 15 |

Scale up

| AWS service | Type (before) | Type (after) |
|---|---|---|
| EC2 | — (scale out only) | — |
| ElastiCache | t3.medium | r6g.large |
| RDS | r5.xlarge | r5.12xlarge |

Miscellaneous

To proactively handle a sudden surge in access, it is crucial to pre-warm the ALB (Application Load Balancer). We submitted a request to AWS, providing the following information for the pre-warming operation: expected peak hours, the number of requests expected during the peak hour, the traffic pattern, use cases, and so on.

On the day of release

Actual access peaked at about 10% of the capacity we had prepared, and the service was stable from the day of release without any trouble. The response from customers was good, and the influx (quotations and applications) for the Prius as a whole, including the KINTO-exclusive grade, seems to have been as expected. After watching things settle for a while, we rolled back the parts that had been in place as tentative measures, and wrapped up.

Finally

It was a rare big event, so I think it was a very valuable experience. Whenever I saw a Prius in town, I couldn't help following it with my eyes: could this be a KINTO ONE car? In hindsight, the infrastructure specifications look excessive, but since it was impossible to accurately predict the inflow in advance and there was no room for mistakes, I think it was the right call. Despite the busy period from the end of the year into the new year, I was able to get through it safely thanks to the cooperation of many colleagues: the service providers who managed the load-test plan and schedule, the team members on the adjacent system who took care of the service outage during the server expansion, and the Platform G team members who advised us on infrastructure expansion. I want to express my gratitude here. Thank you very much.
Hello. This is @p2sk from the DBRE team.

The DBRE (Database Reliability Engineering) team is a cross-functional organization that solves database-related problems and develops platforms to balance agility and governance across the organization. DBRE is a relatively new concept: few companies have a DBRE organization, and even among those that do, the activities and philosophies differ, making it a very interesting field that is still developing. For the background behind the launch of our DBRE team and its role, see our tech blog post "KTC における DBRE の必要性".

In this article, I introduce how, after failing to identify the root cause of a timeout error caused by lock contention (blocking) in Aurora MySQL, we built a mechanism that periodically collects the information needed to trace such causes after the fact. The approach itself should also apply to RDS for MySQL, to MySQL PaaS offerings on clouds other than AWS, and to standalone MySQL, so I hope you find it useful.

Background: A Timeout Caused by Blocking

A product developer asked us to investigate an application query timeout. The error code was SQL Error: 1205, which indicates a timeout from waiting too long to acquire a lock. We use Performance Insights to monitor Aurora MySQL, and checking the database load for the relevant time window showed that the wait event "synch/cond/innodb/row_lock_wait_cond", which occurs while waiting to acquire row locks, had indeed increased.

Performance Insights dashboard: lock waits (orange) increasing

Performance Insights has a "Top SQL" tab that lists the SQL statements executed in a given time window in order of their contribution to DB load. Checking it showed an UPDATE statement, as in the figure below, but only the SQL that timed out — that is, the blocked side — was displayed.

Top SQL tab: the UPDATE statement shown is the blocked side

"Top SQL" is very useful for identifying, say, the statements contributing most under high CPU load. However, it is not always helpful when you want to find the root cause of blocking like this, because the SQL that is the root cause of the blocking (the blocker) may not itself be putting load on the database.

For example, suppose the following SQL is executed in one session:

```sql
-- Query A
start transaction;
update sample_table set c1 = 1 where pk_column = 1;
```

This single-row update by primary key completes very quickly. But the transaction is left open, and if the following SQL is then executed in another session, it waits for the lock and blocking occurs:

```sql
-- Query B
update sample_table set c2 = 2
```

Query B keeps being blocked, so its wait time grows and it appears in "Top SQL". Query A, on the other hand, finished instantly, so it appears neither in "Top SQL" nor in the MySQL slow query log. This is an extreme example, but it illustrates a case where identifying the blocker with Performance Insights is difficult.

There are cases where Performance Insights can identify the blocker, for example when the same UPDATE statement is executed in large numbers so that "blocking query = blocked query"; in such cases Performance Insights is sufficient. But the causes of blocking are diverse, and the current Performance Insights has its limits.

In the incident at hand, Performance Insights could not identify the blocker either, so we tried to find the cause in the Audit Log, Error Log, General Log, and Slow Query Log, without success. Through this investigation we learned that, as things stood, we lacked the information needed to investigate the cause of blocking after the fact. Having to answer "we cannot determine the cause due to insufficient information" the next time the same thing happens was a situation we had to fix, so we set out to survey possible solutions for identifying the root cause of blocking.

Surveying Solutions

To solve this problem, we investigated the following:

- Amazon DevOps Guru for RDS
- Monitoring SaaS
- Database OSS and DB monitoring tools

Each is described below.

Amazon DevOps Guru for RDS

Amazon DevOps Guru is a service that analyzes the metrics and logs of monitored AWS resources with machine learning, automatically detects performance and operational problems, and proposes recommendations for resolving them. DevOps Guru for RDS is the part of DevOps Guru specialized in detecting DB problems. The difference from Performance Insights is that DevOps Guru for RDS performs problem analysis and proposes solutions automatically; you can sense AWS's ambition to make everything up to incident resolution managed.

When we actually caused blocking, the following recommendation was displayed:

DevOps Guru for RDS recommendation: proposes the wait event and SQL to investigate

The SQL displayed was the blocked side, so identifying the blocker seemed difficult. At present it only goes as far as linking to documentation describing how to investigate when the wait event "synch/cond/innodb/row_lock_wait" contributes to DB load. So for now a human has to make the final call on the proposed causes and recommendations, but I expect a more managed incident-response experience to be offered in the future.

Monitoring SaaS

As a solution that can investigate the cause of database blocking at the SQL level, there is Datadog's database monitoring feature, but at the moment it supports only PostgreSQL and SQL Server, not MySQL. New Relic and Mackerel likewise do not seem to provide features for post-hoc investigation of blocking.

Database OSS and DB monitoring tools

We also looked at the following database OSS and DB monitoring tools, but they did not seem to provide a solution either: Percona Toolkit, Percona Monitoring and Management, and MySQL Enterprise Monitor. The only tool we found that might provide a solution for investigating MySQL blocking was SQL Diagnostic Manager for MySQL. It is a DB monitoring tool for MySQL, but its features are far richer than our requirements call for, and with the price tag that comes with that, we decided against evaluating and adopting it.
Based on this survey, we found that practically no existing solution exists, so we decided to build our own mechanism. As a first step, we organized the manual procedure for investigating the cause of blocking. Since Aurora MySQL version 2 (the MySQL 5.7 series) reaches EOL on October 31 this year, we target the Aurora 3 series (MySQL 8.0) and the InnoDB storage engine.

Manual Procedure for Investigating the Cause of Blocking

To check blocking information in MySQL, you need to consult the following two tables. Note that the MySQL parameter performance_schema must be set to 1 to enable the Performance Schema.

- performance_schema.metadata_locks: stores information about acquired metadata locks; records with lock_status = 'PENDING' show the queries being blocked
- performance_schema.data_lock_waits: stores blocking information at the storage-engine level (rows, etc.)

For example, when blocking caused by metadata locks is occurring, SELECTing performance_schema.data_lock_waits returns no records, so the investigation uses the information stored in both tables together. There are views that combine these tables with others to make analysis easier, and they are more convenient to use. They are introduced below.

Investigation step 1: use sys.schema_table_lock_waits

sys.schema_table_lock_waits is a view that wraps SQL over the following three tables:

- performance_schema.metadata_locks
- performance_schema.threads
- performance_schema.events_statements_current

When a wait is occurring on acquiring a metadata lock on some resource, SELECTing this view returns records. For example, in the following situation:

```sql
-- Session 1: acquire a table metadata lock with LOCK TABLES and keep holding it
lock tables sample_table write;

-- Session 2: made to wait while trying to acquire an incompatible shared metadata lock
select * from sample_table;
```

SELECTing sys.schema_table_lock_waits in this state returns a record set like the following. From this view's result, you cannot directly identify the SQL acting as the blocker: the waiting_query column identifies the blocked query, but there is no blocking_query column, so you identify the SQL using blocking_thread_id or blocking_pid.

Identifying the blocker: SQL-based

To identify the blocker via SQL, use the blocker's thread ID. Running the following query against performance_schema.events_statements_current retrieves the SQL text the thread executed last:

```sql
SELECT THREAD_ID, SQL_TEXT
FROM performance_schema.events_statements_current
WHERE THREAD_ID = 55100\G
```

The result looks like the following, for example; it shows that LOCK TABLES had been executed on sample_table, so the blocker is identified.

This method has a drawback: if the blocker executes another query after acquiring the lock, that SQL is retrieved instead, and the blocker cannot be identified. For example:

```sql
-- Session 1: acquire a table metadata lock with LOCK TABLES and keep holding it
lock tables sample_table write;

-- Session 1: run another query after LOCK TABLES
select 1;
```

Running the same query in this state gives the following result. As an alternative, performance_schema.events_statements_history retrieves the last N SQL statements the thread executed:

```sql
SELECT THREAD_ID, SQL_TEXT
FROM performance_schema.events_statements_history
WHERE THREAD_ID = 55100
ORDER BY EVENT_ID\G
```

The result is as follows. Because the history is available, the blocker can also be identified. How many SQL statements are retained per thread can be changed with the performance_schema_events_statements_history_size parameter (we set 10 during verification). The larger the size, the higher the chance of identifying the blocker, but memory usage grows too, and no matter how large the size there is a limit, so balance matters. Whether history collection is enabled can be checked by SELECTing performance_schema.setup_consumers; in Aurora MySQL, collection of performance_schema.events_statements_history appears to be enabled by default.

Identifying the blocker: log-based

To identify the blocker from logs, use the General Log or Audit Log. For example, if General Log collection is enabled on Aurora MySQL, running the following query in CloudWatch Logs Insights retrieves the entire history of SQL executed by the process in question:

```
fields @timestamp, @message
| parse @message /(?<timestamp>[^\s]+)\s+(?<process_id>\d+)\s+(?<type>[^\s]+)\s+(?<query>.+)/
| filter process_id = 208450
| sort @timestamp asc
```

Running this query gives results like the following:

CloudWatch Logs Insights query result: the SQL in the red frame is the blocker

We enable General Log collection as a rule, and with the SQL-based approach there is a risk that the blocker has already been evicted from the history table, so we adopted the log-based method this time.

Caveats when identifying the blocker

Ultimately, identifying the blocker requires human inspection and judgment. Lock-acquisition information is tied directly to threads, and the SQL a thread is executing changes from moment to moment. So in situations like the example above — where the blocker has finished running its query but still holds the lock — you have to infer the root-cause SQL from the history of SQL the blocker's process executed. Even so, just knowing the blocker's thread ID or process ID can be expected to raise the root-cause identification rate considerably.

Investigation step 2: use sys.innodb_lock_waits

This is a view that wraps SQL over the following three tables:
- performance_schema.data_lock_waits
- information_schema.INNODB_TRX
- performance_schema.data_locks

When a wait is occurring on acquiring a lock implemented by the storage engine (InnoDB), SELECTing this view returns records. For example:

```sql
-- Session 1: leave open a transaction that has updated a record
start transaction;
update sample_table set c2 = 10 where c1 = 1;

-- Session 2: try to update the same record
delete from sample_table where c1 = 1;
```

SELECTing sys.innodb_lock_waits in this state returns a record set like the following. As with sys.schema_table_lock_waits, the blocker cannot be identified directly from this result, so you identify it with the log-based method described earlier, using blocking_pid:

```
fields @timestamp, @message
| parse @message /(?<timestamp>[^\s]+)\s+(?<process_id>\d+)\s+(?<type>[^\s]+)\s+(?<query>.+)/
| filter process_id = 208450
| sort @timestamp asc
```

Running this query gives results like the following:

CloudWatch Logs Insights query result: the SQL in the red frame is the blocker

Summary so far

As a first step toward post-hoc investigation of the root cause of Aurora MySQL blocking, we organized how to investigate the root cause while blocking is occurring:

1. Identify the blocker's process ID using the two views sys.schema_table_lock_waits and sys.innodb_lock_waits
2. Use CloudWatch Logs Insights to retrieve the SQL execution history of that process ID from the General Log
3. Identify (infer) the root-cause SQL by visual inspection

Step 1 returns results only while blocking is occurring. Therefore, if we periodically collect and store the equivalent of the two views every N seconds, post-hoc investigation becomes possible. Note that N must be chosen so that N seconds < the application timeout.

Supplementary Notes on Blocking

Two supplementary points: first the difference from deadlocks, then blocking trees.

Difference from deadlocks

Blocking is occasionally confused with deadlocks, so let me sort out the difference. A deadlock is a kind of blocking, but with a deadlock it is certain that the situation will not resolve unless one of the processes is forcibly rolled back. So when InnoDB detects a deadlock, it resolves it automatically and relatively quickly. Ordinary blocking, on the other hand, resolves once the blocker's query finishes, so InnoDB does not intervene. The comparison:

| | Blocking | Deadlock |
|---|---|---|
| Automatic resolution by InnoDB | No | Yes |
| Query completion | Unless terminated midway by KILL or a timeout error, both the blocking and the blocked side eventually finish | One of the transactions is forcibly terminated by InnoDB |
| Typical resolution | Resolves naturally when the blocker's query completes, or with a timeout error after the query timeout configured on the application side | After InnoDB detects the deadlock, it forcibly rolls back one of the transactions |

Blocking trees

Though not an official MySQL term, let me explain blocking trees: situations where the query acting as a blocker is itself blocked by another blocker. For example:

```sql
-- Session 1
begin;
update sample_table set c1 = 2 where pk_column = 1;

-- Session 2
begin;
update other_table set c1 = 3 where pk_column = 1;
update sample_table set c1 = 4 where pk_column = 1;

-- Session 3
update other_table set c1 = 5 where pk_column = 1;
```

In this situation, SELECTing sys.innodb_lock_waits returns two records: "session 1 is blocking session 2" and "session 2 is blocking session 3". Seen from session 3, the blocker is session 2, but the root cause of the problem (the root blocker) is session 1. Blocking can thus take the shape of a tree, and in such cases log-based investigation becomes even harder. The importance of collecting blocking information in advance lies precisely in how difficult such investigations are.

Below, I introduce the design and implementation of our mechanism for collecting blocking information.

Architecture Design

We run multiple Aurora MySQL clusters per region across multiple regions, so the design had to minimize deployment and operational burden across regions and clusters. We also organized the following requirements.

Functional requirements:
- Periodically execute arbitrary SQL against Aurora MySQL
- Collect information from Aurora MySQL in any region
- Manage which DBs are targeted
- Store query results in external storage
- Query the stored data with SQL
- Restrict access to the collected data to people who hold permissions on the source DB

Non-functional requirements:
- Keep the added load (overhead) on the target DBs minimal
- Keep data freshness for analysis within about a 5-minute lag
- Get notified if the system stops working
- SQL-based analysis responds within a few seconds
- Consolidate the collected information in a single storage location
- Keep the monetary cost of operation minimal

We also organized the tables to collect, as follows.

Tables to collect

Periodically collecting the SELECT results of sys.schema_table_lock_waits and sys.innodb_lock_waits would be sufficient, but these views are complex and heavier than SELECTing their underlying tables directly. Considering the non-functional requirement of minimal overhead on the target DBs, we decided to SELECT the following six underlying tables and reconstruct the views on the query-engine side, offloading the query load there.

Source tables of sys.schema_table_lock_waits:
- performance_schema.metadata_locks
- performance_schema.threads
- performance_schema.events_statements_current

Source tables of sys.innodb_lock_waits:
- performance_schema.data_lock_waits
- information_schema.INNODB_TRX
- performance_schema.data_locks

The simplest approach would be to use MySQL Event, MySQL's task scheduler, to run SELECTs against these tables every N seconds and save the results to dedicated tables. But that would put frequent write load on the target DB and require logging in to each DB individually to check results, among other properties ill-suited to the requirements. So we considered alternatives.

Architecture options

We first drew an abstract architecture diagram as shown below, then selected the AWS services for each layer against the requirements.

Service selection for the Collector: based on our usage track record, we considered the following services and decided to center the design on Lambda.

- EC2: the workload does not need always-on compute, so this is excessive in management and cost; the mechanism would also depend on EC2 deployments and the runtime environment on the instance
- ECS on EC2: same as above, and it depends on a container repository such as ECR
- ECS on Fargate: serverless like Lambda, but depends on a container repository such as ECR
- Lambda: more self-contained than the other compute services and judged optimal for the lightweight processing we envision

Service selection for the Storage / Query Interface: we chose S3 + Athena, because:
- We want to run SQL including JOINs (CloudWatch Logs was also considered as storage but rejected for this reason)
- Fast response times and transaction processing are not required, so DB services such as RDS / DynamoDB / Redshift offer no advantage

Service selection for the Buffer: we adopted Amazon Data Firehose as the buffer layer between the Collector and Storage. We also considered Kafka / SQS / Kinesis Data Streams, but chose Firehose because:
- Putting to Firehose automatically saves the data to S3 (no extra coding)
- Buffering by time or data size and writing to S3 in batches keeps the number of S3 files down
- Automatic compression keeps S3 file sizes down
- The dynamic partitioning feature lets us decide S3 file paths dynamically

Based on these services, we drew up five architecture options, illustrated below for a single region for simplicity.

Option 1: invoke Lambda from MySQL Event. Aurora MySQL is integrated with Lambda; this pattern uses MySQL Event to invoke Lambda periodically. The architecture is as follows.

![Option 1: architecture for invoking Lambda from MySQL Event](/assets/blog/authors/m.hirose/2024-03-12-13-16-16.png =600x)

Option 2: save data directly from Aurora to S3. Aurora MySQL is also integrated with S3 and can save data to it directly, making the architecture very simple, as shown below. On the other hand, as with Option 1, MySQL Events must be deployed, so creating or modifying an Event requires cross-cutting deployment to multiple DB clusters: either handling each manually or building a mechanism to deploy to all target clusters.

![Option 2: architecture for saving files directly from Aurora to S3](/assets/blog/authors/m.hirose/2024-03-12-13-15-50.png =300x)

Option 3: Step Functions pattern A. This combines Step Functions and Lambda. Using a Map state, a child workflow corresponding to the Collector runs in parallel for each target cluster. "Run SQL every N seconds" is implemented with a combination of Lambda and Wait states, which makes the number of state transitions very large. Standard Step Functions workflows bill per state transition, while Express workflows — though limited to a maximum of 5 minutes per execution — do not bill per transition, so the transition-heavy parts are implemented as Express workflows. We referred to this AWS Blog.

Option 4: Step Functions pattern B. Like Option 3, this combines Step Functions and Lambda, but here "run SQL every N seconds" is implemented inside the Lambda, which repeats "run SQL -> sleep N seconds" for 10 minutes. Because a Lambda execution is capped at 15 minutes, EventBridge starts the Step Functions every 10 minutes. With very few state transitions, the Step Functions cost stays low; on the other hand, the Lambda keeps running while it sleeps, so the Lambda charges are expected to be higher than in Option 3.

![Option 4: architecture for Step Functions pattern B](/assets/blog/authors/m.hirose/2024-03-12-13-23-40.png =600x)

Option 5: sidecar pattern. We mainly use ECS as our container orchestration service, and this option relies on the premise that at least one ECS cluster can access each Aurora MySQL. Placing a newly implemented Collector in the task as a sidecar has the advantage that no additional compute cost such as Lambda arises; however, if the processing does not fit within the Fargate task's resources, expansion becomes necessary.

![Option 5: architecture for the sidecar pattern](/assets/blog/authors/m.hirose/2024-03-12-13-47-37.png =600x)

Architecture comparison

The comparison of the options is summarized in the table below.
| | Option 1 | Option 2 | Option 3 | Option 4 | Option 5 |
|---|---|---|---|---|---|
| Developed and operated by | DBRE | DBRE | DBRE | DBRE | Containers are outside our scope, so another team must be asked |
| Monetary cost | ◎ | ◎ | ⚪︎ | △ | ◎ |
| Implementation cost | △ | ⚪︎ | △ | ⚪︎ | ⚪︎ |
| Development agility | ⚪︎ (DBRE) | ⚪︎ (DBRE) | ⚪︎ (DBRE) | ⚪︎ (DBRE) | △ (coordination between teams needed) |
| Ease of deployment | △ (Event deployment is manual or needs a dedicated mechanism) | △ (Event deployment is manual or needs a dedicated mechanism) | ⚪︎ (manageable as IaC in the existing development flow) | ⚪︎ (manageable as IaC in the existing development flow) | △ (coordination between teams needed) |
| Scalability | ⚪︎ | ⚪︎ | ⚪︎ | ⚪︎ | △ (coordination with the team that owns Fargate) |
| Specific considerations | IAM and DB-user permission setup is needed for Aurora to invoke Lambda | No buffering, so writes to S3 are synchronous and API calls are numerous | The implementation must account for Express workflows being an at-least-once model | The Lambda runs longer than necessary, making this the most expensive option | A sidecar container is created per task, so processing runs redundantly |

Based on this comparison, we adopted Option 3 (Step Functions combining standard and Express workflows), because:

- We expect to expand the kinds of data collected, and being able to control development and operation within our own team (DBRE) lets us move with speed
- The MySQL Event options are simple, but involve many concerns such as cross-cutting IAM changes and DB-user permission grants; whether automated or covered manually, the human cost is high
- Even if implementation costs somewhat more, Option 3 gains enough elsewhere to be the best balanced choice

Below, I introduce the final architecture and the points we refined while implementing the chosen option.

Implementation

The DBRE team develops in a monorepo, using Nx as the management tool. Infrastructure is managed with Terraform, and the Lambdas are implemented in Go. For the DBRE team's Nx-based development flow, see our tech blog post "AWSサーバレスアーキテクチャをMonorepoツール - Nxとterraformで構築してみた!".

Final architecture diagram

Taking multi-region support and other points into account, the final architecture came out as shown below. The main considerations:

- An Express workflow is force-terminated at 5 minutes and treated as an error, so we end it after 4 minutes
- DynamoDB is accessed infrequently and latency is not a bottleneck, so it is consolidated in the Tokyo region
- Data sync to S3 after Putting to Firehose is asynchronous, so latency is not a bottleneck and S3 is consolidated in the Tokyo region
- To limit the monetary cost of high-frequency Secrets Manager access, secrets are fetched outside the state loop
- A lock mechanism using DynamoDB prevents each Express workflow from executing more than once (Express workflows follow an at-least-once execution model)

Some of the implementation details we refined follow.

Creating a dedicated DB user on each DB

Only the following two privileges are needed to run the SQL in question:

```sql
GRANT SELECT ON performance_schema.* TO ${user_name};
-- Needed to SELECT information_schema.INNODB_TRX
GRANT PROCESS ON *.* TO ${user_name};
```

We built a mechanism that creates a DB user holding only these privileges on every Aurora MySQL. We already have a batch job that connects to all our Aurora MySQL clusters daily and collects various information; we modified it to create the required DB user on all DBs, so the necessary user exists automatically even when a new DB is created.

Reducing DB load and the volume of data stored in S3

Some of the six target tables return records even when no blocking is occurring. SELECTing everything every N seconds would therefore slightly but needlessly increase the load on Aurora and needlessly accumulate data in S3. To avoid this, we implemented it so that the related tables are fully SELECTed only while blocking is occurring, and we designed the blocking-detection SQL itself to minimize load, as follows.

Detecting metadata blocking:

```sql
select * from `performance_schema`.`metadata_locks` where lock_status = 'PENDING' limit 1
```

Only when this query returns a record do we SELECT all records of the following three tables and Put them to Firehose: performance_schema.metadata_locks, performance_schema.threads, performance_schema.events_statements_current.

Detecting InnoDB blocking:

```sql
select * from `information_schema`.`INNODB_TRX` where timestampdiff(second, `TRX_WAIT_STARTED`, now()) >= 1 limit 1;
```

Only when this query returns a record do we SELECT all records of the following three tables and Put them to Firehose: performance_schema.data_lock_waits, information_schema.INNODB_TRX, performance_schema.data_locks.

Concurrent query execution with goroutines

Even if the SELECTs against the tables run at slightly different times, as long as the blocking persists, the probability of data inconsistency when JOINing later is low. Still, it is preferable to run them at as close to the same time as possible, and to keep "data collected every N seconds" working, the Collector Lambda must finish as quickly as possible. For these two reasons, queries are executed concurrently with goroutines wherever possible.

Session variables to avoid unexpected overload

We confirmed in advance that the planned queries are sufficiently light, but situations such as "execution takes longer than expected" or "the collection queries themselves get caught up in blocking" are conceivable. To keep collecting information as safely as possible, we set max_execution_time and TRANSACTION ISOLATION LEVEL READ UNCOMMITTED at the session level. To implement this in Go, we override the Connect() function of the driver.Connector interface in the database/sql/driver package. An implementation sketch with error handling omitted:

```go
import (
	"context"
	"database/sql"
	"database/sql/driver"

	"github.com/go-sql-driver/mysql"
)

type sessionCustomConnector struct {
	driver.Connector
}

func (c *sessionCustomConnector) Connect(ctx context.Context) (driver.Conn, error) {
	conn, _ := c.Connector.Connect(ctx)
	execer, _ := conn.(driver.ExecerContext)
	sessionContexts := []string{
		"SET SESSION max_execution_time = 1000",
		"SET SESSION TRANSACTION ISOLATION LEVEL READ UNCOMMITTED",
	}
	for _, sessionContext := range sessionContexts {
		execer.ExecContext(ctx, sessionContext, nil)
	}
	return conn, nil
}

func main() {
	cfg, _ := mysql.ParseDSN("dsn_string")
	defaultConnector, _ := mysql.NewConnector(cfg)
	db := sql.OpenDB(&sessionCustomConnector{defaultConnector})
	rows, _ := db.Query("SELECT * FROM performance_schema.threads")
	_ = rows
	// ...
}
```

A lock mechanism for Express workflows

Step Functions Express workflows follow an at-least-once execution model, so a whole workflow may be executed more than once. In our case duplicate execution is not a big problem, but exactly-once is preferable, so we implemented a simple lock mechanism using DynamoDB, referring to an AWS Blog. Concretely, the Lambda that runs at the start of an Express workflow PUTs an item into a DynamoDB table with an attribute_not_exists condition expression. The partition key is a unique ID generated by the parent workflow, so "the PUT succeeded" means "I am the first executor". If the PUT fails, we judge that another child workflow is already running, skip the remaining processing, and exit.
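The article does not show the lock code itself, so purely to illustrate the conditional-write idea, here is a minimal sketch in Go with the AWS SDK for Go v2. The table name "workflow-locks", the key name "pk", and the function shape are assumptions for illustration, not the team's actual implementation:

```go
package main

import (
	"context"
	"errors"
	"fmt"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/config"
	"github.com/aws/aws-sdk-go-v2/service/dynamodb"
	"github.com/aws/aws-sdk-go-v2/service/dynamodb/types"
)

// tryAcquireLock returns true only for the first caller that PUTs this execution ID.
func tryAcquireLock(ctx context.Context, client *dynamodb.Client, executionID string) (bool, error) {
	_, err := client.PutItem(ctx, &dynamodb.PutItemInput{
		TableName: aws.String("workflow-locks"), // hypothetical table name
		Item: map[string]types.AttributeValue{
			"pk": &types.AttributeValueMemberS{Value: executionID},
		},
		// The PUT succeeds only if no item with this key exists yet.
		ConditionExpression: aws.String("attribute_not_exists(pk)"),
	})
	if err != nil {
		var ccf *types.ConditionalCheckFailedException
		if errors.As(err, &ccf) {
			return false, nil // another child workflow already holds the lock
		}
		return false, err
	}
	return true, nil
}

func main() {
	cfg, err := config.LoadDefaultConfig(context.TODO())
	if err != nil {
		panic(err)
	}
	ok, err := tryAcquireLock(context.TODO(), dynamodb.NewFromConfig(cfg), "execution-id-from-parent")
	if err != nil {
		panic(err)
	}
	fmt.Println("acquired:", ok)
}
```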
Using Amazon Data Firehose dynamic partitioning

We use Firehose's dynamic partitioning feature to decide S3 file paths dynamically. The partitioning rule (the S3 bucket prefix) was configured as follows, with the Athena-side access control described later also in mind:

```
!{partitionKeyFromQuery:db_schema_name}/!{partitionKeyFromQuery:table_name}/!{partitionKeyFromQuery:env_name}/!{partitionKeyFromQuery:service_name}/day=!{timestamp:dd}/hour=!{timestamp:HH}/
```

When JSON data is Put to a Firehose stream with this configuration, Firehose looks up the partition-key attributes in the JSON and automatically saves the file to S3 under a path that follows the rule. For example, suppose the following JSON is Put to Firehose:

```json
{
  "db_schema_name": "performance_schema",
  "table_name": "threads",
  "env_name": "dev",
  "service_name": "some-service",
  "other_attr1": "hoge",
  "other_attr2": "fuga",
  ...
}
```

The resulting S3 file path looks like the following. There is no need to specify a file path at Put time at all; Firehose decides the file name automatically based on the predefined rule.

File saved to S3 by Firehose: the file path is decided automatically by dynamic partitioning

Since schema names, table names, and the like do not exist in the SELECT results of the MySQL tables, we add them as common columns when generating the JSON to Put to Firehose.
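As an illustration of that Put step, here is a minimal sketch in Go with the AWS SDK for Go v2. The stream name "blocking-stats" and the exact field set are assumptions for illustration; the real records carry the full SELECTed row plus the common columns:

```go
package main

import (
	"context"
	"encoding/json"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/config"
	"github.com/aws/aws-sdk-go-v2/service/firehose"
	"github.com/aws/aws-sdk-go-v2/service/firehose/types"
)

func main() {
	cfg, err := config.LoadDefaultConfig(context.TODO())
	if err != nil {
		panic(err)
	}
	client := firehose.NewFromConfig(cfg)

	// The common columns are added here because they are not part of the
	// MySQL SELECT result; Firehose derives the S3 path from them.
	record := map[string]any{
		"db_schema_name": "performance_schema",
		"table_name":     "threads",
		"env_name":       "dev",
		"service_name":   "some-service",
		// ... columns from the SELECTed row follow
	}
	data, err := json.Marshal(record)
	if err != nil {
		panic(err)
	}

	_, err = client.PutRecord(context.TODO(), &firehose.PutRecordInput{
		DeliveryStreamName: aws.String("blocking-stats"), // hypothetical stream name
		Record:             &types.Record{Data: data},
	})
	if err != nil {
		panic(err)
	}
}
```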
"storage.location.template" = "s3://<bucket_name>/performance_schema/metadata_locks/${env_name}/${service_name}/day=${day}/hour=${hour}" ); ポイントはパーティションキーの設計です。これにより「元の DB へのアクセス権限を持った人だけがデータにアクセスできる」状態にしています。弊社では、各サービスに固有の service_name と、環境ごとに固有の env_name という 2 つのタグを全ての AWS リソースに付与しており、このタグをアクセス制御手段の 1 つとして活用しています。この 2 つのタグを S3 に保存するファイルパスの一部に含め、各サービスに共通で付与している IAM Policy に対してポリシー変数を用いて Resource を記述することで、同じテーブルであってもアクセス権限を持っている S3 のファイルパスに相当するパーティションのデータしか SELECT できない、という状態にしています。 各サービスに共通で付与している IAM Policy に付与する権限のイメージは以下の通りです。 { "Effect": "Allow", "Action": [ "s3:GetObject" ], "Resource": [ "arn:aws:s3:::<bukect_name>/*/${aws:PrincipalTag/env_name}/${aws:PrincipalTag/service_name}/*" ] } また、今回はパーティションをメンテナンスフリーにしたかったので、 パーティション射影 を使っています。パーティション射影を使う場合、基本的にはパーティションキーの取りうる値の範囲が既知である必要がありますが、 injected 射影型 を使うことで値の範囲を Athena に伝える必要なく、メンテナンスフリーな動的パーティショニングを実現しています。 Athena 側での View の再現 ブロッキングの事後調査に必要な 6 つのテーブルを Athena に作成し、MySQL 側と同様に View でラップした方法をご紹介します。View の定義は、MySQL 側の View 定義をもとに、前述の共通カラムを付与したり、パーティションキーの比較を JOIN 句に追加するなどの修正を加えました。 sys.innodb_lock_waits の Athena 側での定義は以下の通りです。 CREATE OR REPLACE VIEW innodb_lock_waits AS select DATE_ADD('hour', 9, w.stats_collected_at_utc) as stats_collected_at_jst, w.stats_collected_at_utc as stats_collected_at_utc, w.aurora_cluster_timezone as aurora_cluster_timezone, r.trx_wait_started AS wait_started, date_diff('second', r.trx_wait_started, r.stats_collected_at_utc) AS wait_age_secs, rl.OBJECT_SCHEMA AS locked_table_schema, rl.OBJECT_NAME AS locked_table_name, rl.PARTITION_NAME AS locked_table_partition, rl.SUBPARTITION_NAME AS locked_table_subpartition, rl.INDEX_NAME AS locked_index, rl.LOCK_TYPE AS locked_type, r.trx_id AS waiting_trx_id, r.trx_started AS waiting_trx_started, date_diff('second', r.trx_started, r.stats_collected_at_utc) AS waiting_trx_age_secs, r.trx_rows_locked AS waiting_trx_rows_locked, r.trx_rows_modified AS waiting_trx_rows_modified, r.trx_mysql_thread_id AS waiting_pid, r.trx_query AS waiting_query, rl.ENGINE_LOCK_ID AS waiting_lock_id, rl.LOCK_MODE AS waiting_lock_mode, b.trx_id AS blocking_trx_id, b.trx_mysql_thread_id AS blocking_pid, b.trx_query AS blocking_query, bl.ENGINE_LOCK_ID AS blocking_lock_id, bl.LOCK_MODE AS blocking_lock_mode, b.trx_started AS blocking_trx_started, date_diff('second', b.trx_started, b.stats_collected_at_utc) AS blocking_trx_age_secs, b.trx_rows_locked AS blocking_trx_rows_locked, b.trx_rows_modified AS blocking_trx_rows_modified, concat('KILL QUERY ', cast(b.trx_mysql_thread_id as varchar)) AS sql_kill_blocking_query, concat('KILL ', cast(b.trx_mysql_thread_id as varchar)) AS sql_kill_blocking_connection, w.env_name as env_name, w.service_name as service_name, w.day as day, w.hour as hour from ( ( ( ( data_lock_waits w join INNODB_TRX b on( ( b.trx_id = cast( w.BLOCKING_ENGINE_TRANSACTION_ID as bigint ) ) and w.stats_collected_at_utc = b.stats_collected_at_utc and w.day = b.day and w.hour = b.hour and w.env_name = b.env_name and w.service_name = b.service_name ) ) join INNODB_TRX r on( ( r.trx_id = cast( w.REQUESTING_ENGINE_TRANSACTION_ID as bigint ) ) and w.stats_collected_at_utc = r.stats_collected_at_utc and w.day = r.day and w.hour = r.hour and w.env_name = r.env_name and w.service_name = r.service_name ) ) join data_locks bl on( bl.ENGINE_LOCK_ID = w.BLOCKING_ENGINE_LOCK_ID and bl.stats_collected_at_utc = w.stats_collected_at_utc and bl.day = w.day and bl.hour = w.hour and bl.env_name = w.env_name and bl.service_name = w.service_name ) ) join data_locks rl on( rl.ENGINE_LOCK_ID = 
The Athena-side definition of sys.schema_table_lock_waits is:

```sql
CREATE OR REPLACE VIEW schema_table_lock_waits AS
select
  DATE_ADD('hour', 9, g.stats_collected_at_utc) as stats_collected_at_jst,
  g.stats_collected_at_utc AS stats_collected_at_utc,
  g.aurora_cluster_timezone as aurora_cluster_timezone,
  g.OBJECT_SCHEMA AS object_schema,
  g.OBJECT_NAME AS object_name,
  pt.THREAD_ID AS waiting_thread_id,
  pt.PROCESSLIST_ID AS waiting_pid,
  -- sys.ps_thread_account(p.OWNER_THREAD_ID) AS waiting_account,
  -- would need to be included in the SELECT when collecting on the MySQL side; omitted as unnecessary
  p.LOCK_TYPE AS waiting_lock_type,
  p.LOCK_DURATION AS waiting_lock_duration,
  pt.PROCESSLIST_INFO AS waiting_query,
  pt.PROCESSLIST_TIME AS waiting_query_secs,
  ps.ROWS_AFFECTED AS waiting_query_rows_affected,
  ps.ROWS_EXAMINED AS waiting_query_rows_examined,
  gt.THREAD_ID AS blocking_thread_id,
  gt.PROCESSLIST_ID AS blocking_pid,
  -- sys.ps_thread_account(g.OWNER_THREAD_ID) AS blocking_account,
  -- would need to be included in the SELECT when collecting on the MySQL side; omitted as unnecessary
  g.LOCK_TYPE AS blocking_lock_type,
  g.LOCK_DURATION AS blocking_lock_duration,
  concat('KILL QUERY ', cast(gt.PROCESSLIST_ID as varchar)) AS sql_kill_blocking_query,
  concat('KILL ', cast(gt.PROCESSLIST_ID as varchar)) AS sql_kill_blocking_connection,
  g.env_name as env_name,
  g.service_name as service_name,
  g.day as day,
  g.hour as hour
from (
  (
    (
      (
        (
          metadata_locks g
          join metadata_locks p on (
            (g.OBJECT_TYPE = p.OBJECT_TYPE)
            and (g.OBJECT_SCHEMA = p.OBJECT_SCHEMA)
            and (g.OBJECT_NAME = p.OBJECT_NAME)
            and (g.LOCK_STATUS = 'GRANTED')
            and (p.LOCK_STATUS = 'PENDING')
            and (g.stats_collected_at_utc = p.stats_collected_at_utc
              and g.day = p.day and g.hour = p.hour
              and g.env_name = p.env_name and g.service_name = p.service_name)
          )
        )
        join threads gt on (
          g.OWNER_THREAD_ID = gt.THREAD_ID
          and g.stats_collected_at_utc = gt.stats_collected_at_utc
          and g.day = gt.day and g.hour = gt.hour
          and g.env_name = gt.env_name and g.service_name = gt.service_name
        )
      )
      join threads pt on (
        p.OWNER_THREAD_ID = pt.THREAD_ID
        and p.stats_collected_at_utc = pt.stats_collected_at_utc
        and p.day = pt.day and p.hour = pt.hour
        and p.env_name = pt.env_name and p.service_name = pt.service_name
      )
    )
    left join events_statements_current gs on (
      g.OWNER_THREAD_ID = gs.THREAD_ID
      and g.stats_collected_at_utc = gs.stats_collected_at_utc
      and g.day = gs.day and g.hour = gs.hour
      and g.env_name = gs.env_name and g.service_name = gs.service_name
    )
  )
  left join events_statements_current ps on (
    p.OWNER_THREAD_ID = ps.THREAD_ID
    and p.stats_collected_at_utc = ps.stats_collected_at_utc
    and p.day = ps.day and p.hour = ps.hour
    and p.env_name = ps.env_name and p.service_name = ps.service_name
  )
)
where (g.OBJECT_TYPE = 'TABLE')
```

Results

Using the mechanism we built, let's actually cause blocking and investigate on the Athena side:

```sql
select *
from innodb_lock_waits
where stats_collected_at_jst between timestamp '2024-03-01 15:00:00' and timestamp '2024-03-01 16:00:00'
  and env_name = 'dev'
  and service_name = 'some-service'
  and hour between cast(date_format(DATE_ADD('hour', -9, timestamp '2024-03-01 15:00:00'), '%H') as integer)
              and cast(date_format(DATE_ADD('hour', -9, timestamp '2024-03-01 16:00:00'), '%H') as integer)
  and day = 1
order by stats_collected_at_jst asc
limit 100
```

Running this query in Athena for the time window in which blocking occurred returns results like the following. The blocker's SQL is not known from this alone, so based on the process ID (the blocking_pid column) we check the history of SQL the blocker executed, using CloudWatch Logs Insights:

```
fields @timestamp, @message
| parse @message /(?<timestamp>[^\s]+)\s+(?<process_id>\d+)\s+(?<type>[^\s]+)\s+(?<query>.+)/
| filter process_id = 215734
| sort @timestamp desc
```
This gave results like the following, and we could identify the blocker's SQL as update d1.t1 set c1 = 12345. By the same procedure, schema_table_lock_waits now lets us check metadata-related blocking as well.

Future Outlook

Going forward, we are considering the following:

- Rollout to products is still ahead, so accumulate know-how on blocking-induced incidents through real operation
- Investigate bottlenecks and tune to minimize the Lambda's billed duration
- Expand the collection targets in performance_schema and information_schema: broaden investigations (e.g., index-usage analysis) and improve our ability to solve DB-layer problems by cycling feedback from incident response into the collected information
- Visualize with BI services such as Amazon QuickSight, so that even members not versed in performance_schema and the like can investigate causes

Summary

In this article, prompted by the investigation of a timeout error caused by lock contention (blocking) in Aurora MySQL, I introduced how we built a mechanism that periodically collects the information needed to trace blocking causes after the fact. To trace blocking in MySQL, you need to periodically collect wait information about two kinds of locks — metadata locks and InnoDB locks — from the following six tables:

Metadata locks:
- performance_schema.metadata_locks
- performance_schema.threads
- performance_schema.events_statements_current

InnoDB locks:
- performance_schema.data_lock_waits
- information_schema.INNODB_TRX
- performance_schema.data_locks

We designed and implemented an architecture suited to our environment that collects information across multiple regions and multiple DB clusters. As a result, we built a mechanism that enables SQL-based post-hoc investigation with at most a 5-minute lag from when blocking occurs, with results returned in a few seconds. These capabilities may eventually be built into SaaS or AWS services, but the DBRE team values implementing features ourselves when we judge them necessary.

The KINTO Technologies DBRE team is actively looking for people to work with us! Casual interviews are welcome, so if this sparked even a little interest, feel free to contact us by DM on X. Please also follow our recruiting Twitter account!

Appendix: References

How to investigate blocking in the MySQL 8.0 series is summarized very clearly in the following articles (in Japanese), which we found helpful:
- InnoDBの行ロック状態を確認する[その1]
- InnoDBの行ロック状態を確認する[その2]
- 磯野ー、MySQLのロック競合を表示しようぜー

Once the blocker is known, you also need to dig into what kind of lock caused the blocking; MySQL locks are summarized very clearly in:
- MySQLのロックについて

We also referred to the MySQL reference manual:
- The metadata_locks Table
- The data_locks Table
- InnoDB INFORMATION_SCHEMA Transaction and Locking Information
Interviewee

The Global Development Division at KINTO Technologies has a DevOps team with multinational and diverse backgrounds. Although the members differ in languages, technologies, and experience, they work together very smoothly as a team. This time, we interviewed the team's leader, Li-san (the author of the article "Introduction of Flyway"). If you are interested in managing an international team, please read on.

Self- and Team-Introduction

Self-introduction

I'm Li, a DevOps team leader in the Global Development Division at KINTO Technologies. After graduating, I spent five years working at an IT subsidiary of a Japanese manufacturer. Japanese was the common language used in the company, which prompted me to learn Japanese and Japanese culture and to enhance my communication skills across cultures. At that time, I started out developing, designing, and evaluating web applications, and gained experience as a systems engineer and project manager — not only development experience, but also knowledge of the development process and management. I then served as a lecturer at a university for three years, learning new technologies and pedagogy; this experience also deepened my understanding of effective learning methods. After moving to Japan, I returned to the IT field when my second daughter started preschool. Drawing on my cross-cultural communication skills, development experience, management skills, and experience in human resource development, I currently lead a DevOps team in the Global Development Division.

Team introduction

At present, our DevOps team consists of three subteams: CI/CD, Infrastructure, and Test. The six members of the team, with their different nationalities and areas of expertise, are listed below. From the beginning, the DevOps team's philosophy has been to support application development teams by intentionally combining members with different skill sets.

| No. | Subteam | Previous work experience | Nationality | English proficiency | Japanese proficiency |
|---|---|---|---|---|---|
| 1 | CI/CD | Development, project management, quality assurance, and system design | China | B | B |
| 2 | Infra | Infrastructure engineer, project promotion, integration, and design | China | C | A |
| 3 | Infra | Infrastructure engineer, development, and pre-sales | China | C | B |
| 4 | Test | Network engineer and project management | New Zealand | A | C |
| 5 | Test | Development engineer | India | A | C |
| 6 | Test | Testing, development, and design | Myanmar | B | C |

*Language proficiency — A: can reconstruct and express subtle nuances; B: can discuss a wide range of complex topics; C: can express personal thoughts and reasoning; D: can engage in everyday conversation

Q&A Corner

It can be hard to imagine the daily work of a team with diverse language, cultural, and technical backgrounds. So this time we asked a few questions of the team leader, Li-san. We summarize them here in Q&A format to give you some insight into the team.

Q&A Part 1

Q. How do you keep everyone on your DevOps team working toward a common goal, given the variety of professional backgrounds within the team?

A. We share a clear roadmap within our team, as shown below. Based on this roadmap, we set specific goals so that all members are aligned and working in the same direction. The team comprises members with a variety of aspirations; some are interested in quality assurance, others aspire to lead infrastructure projects, while others want to be involved in development.
We understand each member's strengths and aspirations, and work collaboratively to create a roadmap that everyone can agree with. Roadmap (example)

Q&A Part 2

Q. The seven people in your team speak four different native languages. What language do you use to communicate? What about documentation and messages?

A. When speaking and writing documents, members use their preferred language (English or Japanese). Messages are mainly sent in English because everyone on the team understands it. We also encourage the use of translation tools to better understand each other. To communicate across languages and cultures, a presenter in our team will speak slowly, use simple words, and create an atmosphere in which questions are asked immediately whenever something is not understood.

Q&A Part 3

Q. How do you promote knowledge sharing in your culturally and technically diverse team?

A. Our team values knowledge sharing, and we use three methods:

| Period | Means | Content | Purpose |
|---|---|---|---|
| Ice-breaking phase | Knowledge-sharing meetings on each member's field of expertise | Previous knowledge | To understand each other and expand our knowledge |
| One year after assignment | Online course learning (e.g., automated testing tool and microservices architecture courses on Udemy) | Skills lacking for the current job (e.g., system architecture, automated testing tools, or AWS) | To improve skills needed in the current job |
| At any time (after a task is completed) | Documenting and retaining | Procedures, know-how, etc. (e.g., automated test blocks) | To facilitate horizontal development in the future |

Q&A Part 4

Q. As a mother, is it difficult to lead a cross-cultural team?

A. My team is incredibly supportive and understanding of my being a mother. As a culture of the Global Development Division, we respect diversity, respect each other, and maintain an appropriate distance. I have two elementary school children, and when I need to attend school events, my team members support me by adjusting schedules and reallocating tasks. For example, when my entire family recently caught the flu, I couldn't come to work for about two weeks, but my team members willingly took on duties to lighten my workload. I am very grateful for their support.

Q&A Part 5

Q. When managing a multinational team, how do you unify various standards (e.g., work ethics and professional standards) and keep members motivated while achieving a certain level of results?

A. It is important to recognize and respect the rationality of different cultures. As I am from a foreign country myself, I understand the differences between Japanese and foreign practices and can explain them to the team from that perspective. Since we are in Japan, I basically adhere to Japanese standards, but I add commentary from a foreigner's perspective. I also leverage knowledge of management studies and PMP (Project Management Professional), and adopt other common industry practices. I prioritize giving team members significant autonomy, fostering motivation by enabling them to work on projects of their choosing.

Lastly

Q. What do you value most when managing such a global team?

A. I still think communication is the most important thing, in line with the philosophy of PMP. On our team, everyone is committed to communicating their ideas clearly and precisely. We always use the 5W1H framework to communicate in our work. We also respect different ways of thinking.
Basically, we communicate with each other on the shared understanding that people are different.

Q. I imagine the support you receive from your team members stems from trust. What's your secret?

A. In my day-to-day work, I make a conscious effort to support and mentor our members. We work together to help each member achieve their goals. Each of us is an indispensable member of the team. Because we work together toward our goals, I believe we can support each other regardless of our positions as leader or member. I think sincerity is essential.

Summary and Future Prospects

In this article, we interviewed Li-san, the DevOps team leader in the Global Development Division, about cross-cultural communication and knowledge sharing in a multinational team, balancing motherhood with leadership responsibilities, and managing a multinational team. As she noted at the end, even in teams with members from diverse backgrounds, always be aware of and respect the fact that "people are different," and communicate ideas clearly through the 5W1H framework. We found that the accumulation of these practices builds trust and enables team members to support each other efficiently toward common goals. This is certainly true not only for multinational teams, but also for teams of a single nationality. We hope this article is of some help to those struggling to manage teams with diverse backgrounds. Going forward, the DevOps team will face new challenges as the business develops and technology evolves. However, with the efforts made so far and the team's spirit of embracing diversity, we are sure they can overcome any obstacle. We look forward to the team's future success and growth.
Introduction and Summary

Hello. I am Miyashita, a membership management engineer in the Common Services Development Group[^1][^2][^3][^4] at KINTO Technologies. Today, I'd like to talk about how we solved the challenges we faced in building an S3-compatible local storage environment at our development site. Specifically, I'll share a practical approach to leveraging the open-source MinIO to emulate AWS S3 features. I hope this article will be helpful for engineers confronting similar challenges.

What is MinIO?

MinIO is an open-source object storage server with S3-compatible features. Just like NAS, you can upload and download files. There is also a similar service in this area called LocalStack. LocalStack is a tool specialized in AWS emulation and can emulate services such as S3, Lambda, SQS, and DynamoDB locally. Although these two tools serve different purposes, both meet the requirements for setting up an S3-compatible environment locally.

MinIO website
LocalStack website

Tool Selection: MinIO or LocalStack

Development requirements

As a development requirement, it was necessary to automatically create arbitrary S3 buckets simply by running docker-compose, and to register email templates, CSV files, etc., in those buckets. This is because it's cumbersome to register files with commands or a GUI after a container is started. Also, when conducting automated local S3 connection testing, the buckets and files must be ready as soon as the container starts.

Tool Comparison

Comparing how easily each tool meets these requirements: LocalStack uses the aws-cli to create buckets and manipulate files, while MinIO provides a dedicated command-line tool, mc (MinIO Client), which made the setup easier to build. In addition, I found MinIO's GUI-based management console to be more polished. A comparison on Google Trends also shows that MinIO is more popular. For these reasons, we decided to adopt MinIO.

Compose Files

To set up a MinIO local environment, a "compose.yaml" file must first be prepared. Follow the steps below.

1. Create a directory.
2. Create a text file in the directory with the filename "compose.yaml".
3. Copy and paste the contents of compose.yaml below and save it.

The filename docker-compose.yml is not recommended. Click here for the compose file specifications. *docker-compose.yml also works for backwards compatibility. For more information, click here.

```yaml
services:
  # Configure the MinIO server container
  minio:
    container_name: minio_test
    image: minio/minio:latest
    # Start the MinIO server and specify the access port for the management console (GUI)
    command: ['server', '/data', '--console-address', ':9001']
    ports:
      - "9000:9000" # for API access
      - "9001:9001" # for the management console (GUI)
    # USER and PASSWORD can be omitted.
    # In that case, they are automatically set to minioadmin / minioadmin.
    environment:
      - "MINIO_ROOT_USER=minio"
      - "MINIO_ROOT_PASSWORD=minio123"
    # MinIO-managed configuration files and uploaded files.
    # If you want to refer to the files locally or make the registered files persistent,
    # mount a local directory.
    # volumes:
    #   - ./minio/data:/data
    # Enable this if you want the MinIO container to start automatically
    # when it is stopped, e.g. after restarting the PC.
    # restart: unless-stopped

  # Configure the MinIO Client (mc) container
  mc:
    image: minio/mc:latest
    container_name: mc_test
    depends_on:
      - minio
    environment:
      - "MINIO_ROOT_USER=minio"        # Same user name as above
      - "MINIO_ROOT_PASSWORD=minio123" # Same password as above
    # Create buckets with the mc command and place files in the created buckets.
    # First, set an alias so that subsequent commands can easily refer to the MinIO server itself.
    # Here, the alias name is myminio.
    # mb creates a new bucket (short for "make bucket").
    # cp copies local files to MinIO.
    entrypoint: >
      /bin/sh -c "
      mc alias set myminio http://minio:9000 minio minio123;
      mc mb myminio/mail-template;
      mc mb myminio/image;
      mc mb myminio/csv;
      mc cp init_data/mail-template/* myminio/mail-template/;
      mc cp init_data/image/* myminio/image/;
      mc cp init_data/csv/* myminio/csv/;
      "
    # Mount the directory containing the files you want to upload to MinIO.
    volumes:
      - ./myData/init_data:/init_data
```

Directory and File Structure

Create appropriate dummy files and start with the following directory and file structure.

```
minio_test# tree .
.
├── compose.yaml
└── myData
    └── init_data
        ├── csv
        │   └── example.csv
        ├── image
        │   ├── slide_01.jpg
        │   └── slide_04.jpg
        └── mail-template
            └── mail.vm
```

Startup and Operation Check

The following is the flow of running MinIO and its client on Docker and checking their operation. The Docker container is started in the background (using the -d flag) with the following command. If Docker Desktop (for Windows) is installed, containers can be created using a command-line interface such as Command Prompt or PowerShell. Download Docker Desktop here.

```
docker compose up -d
```

*The hyphen in the middle of docker-compose is no longer needed. For more information, click here.

Docker Desktop

Open Docker Desktop and check the container status. You can see that the minio_test container is running, but the mc_test container is stopped. Check the execution log of the mc_test container.

MC Execution Log

The logs indicate that the MinIO Client (mc) has been executed and all commands completed successfully.

Management Console

Next, let's explore the MinIO GUI management console. Access port 9001 on localhost with a browser: http://127.0.0.1:9001. When the login screen appears, enter the username and password configured in compose.yaml (minio and minio123 in this example).

List of Buckets

Select "Object Browser" from the menu on the left. You will see a list of the created buckets and the number of files stored in them.

List of Files

Select the "image" bucket as an example and look inside. You will see the pre-uploaded files. You can view a file directly by selecting "Preview" from the action menu next to the file.

File Preview Function

Our mascot character, the mysterious creature K, is shown in the preview. The ability to preview images directly in the MinIO management console is very useful.

Installation of MC (MinIO Client)

Using the command line can be more efficient than the GUI for handling large numbers of files. Also, when an error occurs while accessing MinIO from source code during development, the command line is very useful for checking file paths. This section describes how to install the MinIO Client and its basic operations. *If you are satisfied with the GUI management console, feel free to skip this section.
```
# Use the following command to download mc. The executable file is stored in an arbitrary directory.
minio_test/mc# curl https://dl.min.io/client/mc/release/linux-amd64/mc \
  --create-dirs \
  -o ./minio-binaries/mc

# Operation check
# Check whether the installed mc is the latest version and display the version to confirm it was installed correctly.
# Adding mc to your PATH is optional. I will not do so this time.
minio_test/mc# ./minio-binaries/mc update
> You are already running the most recent version of ‘mc’.
minio_test/mc# ./minio-binaries/mc -version
> mc version RELEASE.2023-10-30T18-43-32Z (commit-id=9f2fb2b6a9f86684cbea0628c5926dafcff7de28)
> Runtime: go1.21.3 linux/amd64
> Copyright (c) 2015-2023 MinIO, Inc.
> License GNU AGPLv3 <https://www.gnu.org/licenses/agpl-3.0.html>

# Set alias
# Set the alias required to access the MinIO server.
minio_test/mc# ./minio-binaries/mc alias set myminio http://localhost:9000 minio minio123;
> Added `myminio` successfully.

# Examples of file operations
# Display a list of files in a bucket
minio_test/mc# ./minio-binaries/mc ls myminio/image
> [2023-11-07 21:18:54 JST]  11KiB STANDARD slide_01.jpg
> [2023-11-07 21:18:54 JST]  18KiB STANDARD slide_04.jpg
minio_test/mc# ./minio-binaries/mc ls myminio/csv
> [2023-11-07 21:18:54 JST]    71B STANDARD example.csv

# Print file contents to the screen
minio_test/mc# ./minio-binaries/mc cat myminio/csv/example.csv
> name,age,job
> tanaka,30,engineer
> suzuki,25,designer
> satou,40,manager

# Batch file upload
minio_test/mc# ./minio-binaries/mc cp ../myData/init_data/image/* myminio/image/;
> ...t_data/image/slide_04.jpg: 28.62 KiB / 28.62 KiB

# File deletion
minio_test/mc# ./minio-binaries/mc ls myminio/mail-template
> [2023-11-15 11:46:25 JST]   340B STANDARD mail.txt
minio_test/mc# ./minio-binaries/mc rm myminio/mail-template/mail.txt
> Removed `myminio/mail-template/mail.txt`.
```

List of MC Commands

For more detailed documentation on the MinIO Client, please refer to the official manual. Click here for the official MinIO Client manual.

Lastly, Access from Java Source Code

After building an S3-compatible development environment locally using MinIO, I'll demonstrate how to access MinIO from a real Java application. First, configure Gradle.

```groovy
plugins {
    id 'java'
}

java {
    sourceCompatibility = '17'
}

repositories {
    mavenCentral()
}

dependencies {
    // https://mvnrepository.com/artifact/com.amazonaws/aws-java-sdk-s3
    implementation 'com.amazonaws:aws-java-sdk-s3:1.12.582'
}
```

Next, create a Java class to access MinIO.

```java
package com.example.miniotest;

import com.amazonaws.ClientConfiguration;
import com.amazonaws.auth.AWSStaticCredentialsProvider;
import com.amazonaws.auth.BasicAWSCredentials;
import com.amazonaws.auth.EC2ContainerCredentialsProviderWrapper;
import com.amazonaws.client.builder.AwsClientBuilder;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;
import com.amazonaws.services.s3.model.GetObjectRequest;
import com.amazonaws.services.s3.model.S3Object;
import com.amazonaws.services.s3.model.S3ObjectSummary;
import com.amazonaws.services.s3.model.ListObjectsV2Result;
import com.amazonaws.regions.Regions;

import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.util.List;

public class Main {

    public static void main(String... args) {
        new Main().execute();
    }

    /**
     * S3 compatibility test of MinIO.
     * Obtains a list of files in the bucket and displays their contents.
     */
    private void execute() {
        System.out.println("--- Start ---");

        // Switches between connecting to a local MinIO and connecting to AWS S3.
        // Assumed to be switched with the Spring Boot profile.
        boolean isLocal = true;

        // Since MinIO is compatible with AWS S3, you can connect with the AWS library.
        AmazonS3 s3Client = null;
        if (isLocal) {
            s3Client = getAmazonS3ClientForLocal();
        } else {
            s3Client = getAmazonS3ClientForAwsS3();
        }

        // Bucket name
        final String bucketName = "csv";

        // List all objects in the bucket.
        ListObjectsV2Result result = s3Client.listObjectsV2(bucketName);
        List<S3ObjectSummary> objects = result.getObjectSummaries();

        // Loop over the retrieved objects.
        for (S3ObjectSummary os : objects) {
            System.out.println("Filename retrieved from bucket: " + os.getKey());

            // Obtain the contents of the file as a stream.
            // Of course, files can also be downloaded.
            try (S3Object s3object = s3Client.getObject(
                     new GetObjectRequest(bucketName, os.getKey()));
                 BufferedReader reader = new BufferedReader(
                     new InputStreamReader(s3object.getObjectContent()))) {
                String line;
                while ((line = reader.readLine()) != null) {
                    // Print the file contents one line at a time
                    System.out.println(line);
                }
            } catch (IOException e) {
                e.printStackTrace();
            }
            // Insert a blank line between files.
            System.out.println();
        }
        System.out.println("--- End ---");
    }

    /**
     * Connects to the local MinIO.
     * @return AmazonS3 client instance, an implementation of the AmazonS3 interface.
     */
    private AmazonS3 getAmazonS3ClientForLocal() {
        final String id = "minio";
        final String pass = "minio123";
        final String endpoint = "http://127.0.0.1:9000";
        return AmazonS3ClientBuilder.standard()
            .withCredentials(
                new AWSStaticCredentialsProvider(
                    new BasicAWSCredentials(id, pass)))
            .withEndpointConfiguration(
                new AwsClientBuilder.EndpointConfiguration(
                    endpoint, Regions.AP_NORTHEAST_1.getName()))
            .build();
    }

    /**
     * Obtains an Amazon S3 client and sets up a connection to the AWS S3 service.
     * This method uses the IAM role at runtime on an Amazon EC2 instance to automatically
     * obtain credentials and establish a connection with S3.
     * The IAM role must have a policy allowing access to S3.
     *
     * The client is configured as follows:
     * - Region: Regions.AP_NORTHEAST_1 (Asia Pacific (Tokyo))
     * - Maximum connections: 500
     * - Connection timeout: 120 seconds
     * - Number of error retries: up to 15 times
     *
     * Note: This method is intended to be executed on an EC2 instance.
     * When running on anything other than EC2, AWS credentials must be provided separately.
     *
     * @return AmazonS3 client instance, an implementation of the AmazonS3 interface.
     * @see com.amazonaws.auth.EC2ContainerCredentialsProviderWrapper
     * @see com.amazonaws.services.s3.AmazonS3
     * @see com.amazonaws.services.s3.AmazonS3ClientBuilder
     * @see com.amazonaws.regions.Regions
     */
    private AmazonS3 getAmazonS3ClientForAwsS3() {
        return AmazonS3ClientBuilder.standard()
            .withCredentials(new EC2ContainerCredentialsProviderWrapper())
            .withRegion(Regions.AP_NORTHEAST_1)
            .withClientConfiguration(
                new ClientConfiguration()
                    .withMaxConnections(500)
                    .withConnectionTimeout(120 * 1000)
                    .withMaxErrorRetry(15))
            .build();
    }
}
```

Execution Result

```
--- Start ---
Filename retrieved from bucket: example.csv
name,age,job
tanaka,30,engineer
suzuki,25,designer
satou,40,manager

--- End ---
```

Source Code Description

A notable feature of this code is that the AWS SDK for Java supports both MinIO and AWS S3. When connecting to a local MinIO instance, use the getAmazonS3ClientForLocal method; when connecting to AWS S3, use the getAmazonS3ClientForAwsS3 method to initialize the client.
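Since the code comments assume the switch is made with a Spring Boot profile, here is a minimal sketch of how that could look. This configuration class is not part of the original article: the class name, the bean methods, and the "local" profile name are hypothetical, and the client settings simply mirror the two methods above.

```java
package com.example.miniotest;

import com.amazonaws.auth.AWSStaticCredentialsProvider;
import com.amazonaws.auth.BasicAWSCredentials;
import com.amazonaws.auth.EC2ContainerCredentialsProviderWrapper;
import com.amazonaws.client.builder.AwsClientBuilder;
import com.amazonaws.regions.Regions;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.context.annotation.Profile;

// Registers one AmazonS3 bean per profile, so application code can
// simply inject AmazonS3 without an isLocal flag. (Hypothetical class.)
@Configuration
public class S3ClientConfig {

    // Active when the "local" profile is set (e.g. spring.profiles.active=local).
    @Bean
    @Profile("local")
    public AmazonS3 localMinioClient() {
        return AmazonS3ClientBuilder.standard()
            .withCredentials(new AWSStaticCredentialsProvider(
                new BasicAWSCredentials("minio", "minio123")))
            .withEndpointConfiguration(new AwsClientBuilder.EndpointConfiguration(
                "http://127.0.0.1:9000", Regions.AP_NORTHEAST_1.getName()))
            .build();
    }

    // Active for any non-local profile; relies on the EC2 instance role.
    @Bean
    @Profile("!local")
    public AmazonS3 awsS3Client() {
        return AmazonS3ClientBuilder.standard()
            .withCredentials(new EC2ContainerCredentialsProviderWrapper())
            .withRegion(Regions.AP_NORTHEAST_1)
            .build();
    }
}
```

With something like this in place, application code can inject AmazonS3 directly, and the backend is selected simply by starting the application with, for example, spring.profiles.active=local.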
This approach makes it possible to use the same SDK across different backend environments and operate them through the same interface. It is nice to be able to easily test an application before deploying it to the AWS environment, without incurring additional costs. I hope you find this guide helpful. Thank you for reading my article all the way to the end. 🙇‍♂️

[^1]: Post #1 by a teammate from the Common Services Development Group: [グローバル展開も視野に入れた決済プラットフォームにドメイン駆動設計(DDD)を取り入れた]
[^2]: Post #2 by a teammate from the Common Services Development Group: [入社 1 年未満メンバーだけのチームによる新システム開発をリモートモブプログラミングで成功させた話]
[^3]: Post #3 by a teammate from the Common Services Development Group: [JIRA と GitHub Actions を活用した複数環境へのデプロイトレーサビリティ向上の取り組み]
[^4]: Post #4 by a teammate from the Common Services Development Group: [VSCode Dev Container を使った開発環境構築]
Introduction

Hello. I'm Kazuki Morimoto from the Analysis Group in the Data Analysis Division. I usually work in the Osaka office and handle analysis topics such as those from the retention project, the credit project, the used vehicle division, and the MyRoute app. (I'd like to post the details separately on the Tech Blog in the future.) In this article, I'd like to share what I learned from using the preview version of "QuickSight Generative BI." When I attended an AWS Generative AI Workshop held for TOYOTA Group companies last month, I was introduced to it and thought, "This could be quite useful!", so I gave it a try.

What you can find in this article:
- An overview of QuickSight Generative BI
- How I made a sales dashboard for a ramen subscription service using Generative BI
- Good points and future improvements

Contents not covered in this article:
- How to use QuickSight
- Explanation of QuickSight Q
- How to get started with Generative BI (this one was well written: Announcing Generative BI capabilities in Amazon QuickSight)

What is Generative BI?

Simply put, it is a service that applies Amazon Bedrock's LLMs to "QuickSight Q" to add a wider range of functions. Originally, there was a function called "QuickSight Q" that enabled users to ask questions in natural language and receive answers with graphs. Applying Amazon Bedrock's LLMs (Large Language Models) to it has empowered us to perform natural-language analyses with a higher degree of freedom.

Quicksight Q

*QuickSight Q is available in the Tokyo region as well as many other regions. Reference

I Made a Dashboard Using Generative BI

Let's try out Generative BI right away. I am going to create a dashboard imagining KINTO starting a ramen subscription service. *As a side note, KINTO is a mobility service company, not in the ramen business... or at least not for now...

Goals

This time, I will refer to the "Sales" segment of the dashboard published by CRISP, with the goal of creating the following two dashboards.

- Sales (by year and month)
- Contract plan composition ratio

Pre-preparation

As preparation, I will create a service overview and sample data for this ramen subscription.

Service overview

I asked ChatGPT to come up with ramen subscription plans. The plans turned out to be way better than I expected. LOL I'm sold on that!

Create sample data

Next, I asked ChatGPT to create sample data with the following prompt.

Results

It really generated a CSV file with plausible sample data.

Practice

Now, I'd like to import the sample data into QuickSight and create charts. This time, I'm trying it out using the Northern Virginia region in our in-house sandbox environment. *As mentioned above, how to get started with Generative BI is omitted.

1. Create a sales (by year and month) graph

First, I'd like to create a graph of sales (by year and month). Generative BI is only available in English (as of December 11, 2023), so I'll create prompts in English.

Monthly sales estimates as of December 11, 2023

Enter the following prompt into "Build a visual."

Cumulative Sum(Yen) in 2023-12

After pressing Build, the following board was created in about 3 seconds. After confirming that the output is as expected, click "ADD TO ANALYSIS" to add it as a visual.

Create a monthly sales trend graph by month and year

Similarly, enter the following prompt.

Cumulative Sum(Yen) per months

This also produced a graph that was almost as expected. Here is the result of manually changing the size of the graph after adding the visual.
The horizontal axis is in "MMDD, YYYY HH/hh" format, which is difficult to read, so I changed the visual. Generative BI also seems to be able to make visual changes in natural language. As you may have noticed, my English skills are not very strong, so I rely on our in-house AI ChatBot (Sherpa) for support. I typed it in, but apparently the visual editing functionality is still insufficient. Even checking the official website, it seems there isn't much that can be done yet, so I'll modify it manually. I'm looking forward to future updates.

Create a monthly sales ranking table by prefecture

Next, I will try to output a visual in table format. Once again, with the help of Sherpa, I entered the following prompt.

Please provide the monthly fee sum yen for each prefecture in a ranking format in a table in December 2023

(Three consecutive "in"s!)

Results

How impressive. It displayed exactly the table I wanted. Added to the analysis (graph size, etc., has been modified manually).

2. Contract plan composition ratio

Next, I'd like to see the percentage of subscription contract plans as a pie chart. As always, I asked Sherpa to translate and entered the following prompt.

total unique number of Contract ID per contract plan in pie

It's perfect! I also asked it to display the total monthly fee for each plan.

Total monthly fee for each contract plan in table

Dashboard view

Let's turn the created charts into a dashboard and check it. Generative BI created everything except the chronological format of the line graph. It looks pretty good, right?

Executive summary

There is also a function that creates a summary based on the dashboard contents. Click on "Executive summary" from the "Build" button in the upper right corner of the dashboard screen. A summary was created in about 10 seconds. It is structured as a description of the entire dashboard and of each chart. Although it is very simple, the content appears to be accurate. Moreover, a link was embedded in the description of each graph. For large dashboards, clicking on the summary jumps to the linked graph, which is convenient.

Good Points and Future Improvements

Good points:
- It creates charts instantly from natural-language commands. For simple graphs, it is faster than making them manually.
- It will guess the column name to some extent even if you do not explicitly specify it. → Conversely, it is necessary to give columns appropriate names so they can be guessed.
- Monthly totals are also calculated automatically.
- The Executive Summary can be used as a basis for presentation material.

Future improvements:
- The fine-tuning of graphs is still insufficient. It is faster to do this by hand than through natural language.
- English only (as of December 11, 2023).
- The input field is short, making it difficult to correct sentences. After entering a long sentence, it was a little difficult to edit when the graph I wanted was not displayed.
- It does not understand relative time expressions such as "last 3 months."

Conclusion

This time, I tried the public preview version of Generative BI. Although it is not yet at the level of practical business use, I think the service shows a lot of promise depending on future updates. If we can easily and quickly visualize the current situation, we will be able to accelerate the cycle of business improvement, so I look forward to the future. (This is a bit off-topic, but I was very surprised by the high degree of completeness of the sample data and illustrations created by ChatGPT.)
Furthermore, there is an article about the Analysis Group on the Tech Blog below, so please have a read if you are interested. A Look into the KINTO Technologies Analysis Group
Introduction

My name is Endo and I am the FE team leader of the QA Group at KINTO Technologies. I mainly check QA cases from multiple products and projects for the frontend of the Japanese KINTO website developed by KINTO Technologies, and allocate and manage tasks for each of the QA team members. While working, I often engage in conversations with people from both the development side and QA. Recently, while we were chatting, a question came up: "Why do we even do QA?" Since I've been given this opportunity, I would like to share my own thoughts on this.

Is QA Necessary in Development?

Before writing about the role and necessity of the QA team under the theme "Why do we do QA?", I would like to consider whether QA itself is necessary in the first place. Some seminars and articles on quality discuss the necessity of QA. I have heard someone say that in Agile development, a QA team is unnecessary if each team has a QA role within it, rather than having a separate QA team outside of it. For example, if there is a QA team separate from an Agile team, the following concerns may arise:

- Responding flexibly to specification changes within a sprint is likely to affect the subsequent QA process.
- The separate lead time for QA to understand the specifications lengthens the QA process.
- When a problem occurs, it is not addressed during the sprint, leading to significant rework.

From these points, I think the main idea of the discussion was that Agile development can be managed without these concerns as long as there are members in the team who can handle everything from development to testing, including the role of QA. Here, I believe it is important to note that the role of QA itself is not being denied, even though a QA team may not be necessary. On the contrary, those who argue that a QA team is necessary may believe that:

- By having a QA team independent from the development team that can make objective judgments, quality can be improved from a different perspective.
- Knowledge from different specifications can be easily centralized in the QA team, and information can be obtained across projects.
- Based on this consolidated knowledge, the QA team can provide missing considerations and necessary information for other projects running in parallel.

On the other hand, challenges of embedding a QA role within an Agile team include:

- The difficulty of including highly skilled team members who have the knowledge to oversee everything testing-related, from development to system testing.
- If that is not the case, it becomes necessary to train team members, but acquiring the appropriate skills takes time.
- Knowledge pools easily around skilled members, and when those members leave the team, it becomes difficult to handle QA.

I have also come across articles from other companies saying that they do not have a department or group called QA, yet they still do QA work for their service users. In other words, whether to create a QA team as an organization is a choice based on development methods and organizational culture. However, I feel that everyone shares a common understanding that the role of QA, checking the overall quality for users based on system requirements, is necessary.

The Role of QA Beyond Testing

When conducting confirmation and verification, QA is often seen as a specialized testing team, leading to the common misconception that QA is synonymous with testing. However, the role of QA is not limited to testing. Here are three major aspects of this point.
(1) Confirm the correct understanding of specifications

When designing test scenarios, QA has to confirm system requirements and, when needed, ask questions of the development and operations sides. Before conducting testing, QA first confirms what the "correct answers" to the specifications are, based on the requirements. Through this process, QA can point out details in the specifications, helping prevent requirement omissions and refine the information given in the specifications. The following is an example from a revision of the "Screening Application Process" section of the KINTO website. In this case, the expected test result is that the design of "What is required for the application" and "Screening application steps" will match the screen specification. What is required for a regular web application is to upload a driver's license image. However, as KINTO services are also offered via dealerships where driver's licenses can be verified on the spot, the system should be built so that dealership staff can skip the image upload (as indicated by the red box below). When we raised this point with the development side, they realized that it needed to be taken into consideration. While it may be a minor detail, we consider the expected use cases and conditions, ensuring that the correct answer aligns with the expected test results. In this way, the specifications that should be in place are confirmed by making ambiguous expressions or omissions concrete. As mentioned above, through the QA team's specification checks, it is sometimes possible to point out requirements that the development side was not aware of. In addition, by aligning test perspectives in parallel with the development process (although not at the requirements definition stage), we can build in quality earlier, before testing is conducted.

(2) Organize documentation during test design

Working under tight deadlines during development can make it difficult to organize documentation effectively. Even if each specification is available, there are cases where information is not listed or operating procedures are not organized. In QA, while confirming the correct answers to the specifications mentioned in (1) above, the following aspects are identified as necessary during test design: scenarios, function confirmation, and display confirmation. In this case, the original information is on the development side, so the materials are basically used as they are, but if necessary, QA lists procedures and organizes the information to clarify what to check during tests. We sometimes receive positive comments from members outside the QA team, such as how they were able to gain a bird's-eye view of the overall functionality by referring to the materials created by QA, or how helpful those materials were in confirming certain processes. Since the information is consolidated within the project, the contents of the materials compiled by QA are not necessarily the latest, but they are organized for the purpose of understanding the current specifications. In addition, since QA can observe projects running in parallel from a horizontal perspective, they can flag concerns to the development side while organizing specifications, which helps in terms of quality.

(3) Feedback from defect analysis

After a project is completed, a defect analysis is conducted based on the results of QA testing. The following is mainly done on a project-by-project basis.
Depending on the characteristics of each project and product, we offer feedback, which may include requests for additional consideration during the requirements definition stage or for stronger emphasis on unit testing. By providing feedback to the project at review meetings, the project can use it as a reference for the next round of tasks. When analyzing defects, we collaborate to clarify the defect trends for each product and identify project-specific issues that need to be addressed. It is important to report not only the weaknesses of the project but also what worked well. For example, sharing a different approach to fulfilling requirements taken by another project can provide insight, and if a portion of unit testing was successful, it is crucial to communicate what was effective. Our reports are meant to guide the initiative in a positive direction and strengthen it further, so we are careful not to address only negative aspects when giving feedback. Consequently, the role of QA is significant not only in testing, but also in building in quality.

So Why Do We Practice QA?

For those of us who provide web and app services, consider issues such as:

- The design is consistent, but buttons are in different positions on different screens.
- Screen transitions take time.
- While there are no issues on a PC, the text becomes hard to read on a smartphone due to changes in display size.

If these issues persist, users may choose not to use the service, finding it difficult to use or view before even experiencing its appeal. I believe that performing QA work is meaningful not only to confirm that the requirements are met, but also to improve a website so that visitors can use it comfortably. In addition, since system requirements can be checked horizontally through QA work, QA also plays a role in discovering unexpected project risks by obtaining a certain level of specification information. As KINTO Technologies has multiple projects running concurrently for each product, we ask questions such as:

- Are there any risks associated with the timing of project releases?
- If a common specification changes, are other products that may be affected aware of the changes?
- When running a series of tests, are there any issues with the functionality, including linking data across products?

Even in areas where the project side has already considered risks, we check them again during QA to identify any remaining risks and ensure built-in quality. In this way, I think it is necessary for QA to confirm the requirements from an objective viewpoint on behalf of the customers who actually use the service, and to support the building in of quality. I believe this mindset of "for our customers" is important. Our services are used by many people, but each customer experiences the service as an individual. We must prevent a situation where a customer never visits our website again because of minor difficulties in use or display. Therefore, I believe that the mindset of "for our customers" plays an important role in creating a positive cycle: it effectively conveys the appeal of the services we offer, ultimately leading to the acquisition of more customers.

Conclusion

In this article, I shared my thoughts, starting from the question of why we do QA in the first place. As mentioned earlier, the role of QA varies from organization to organization. If you agree with this content, or if you have different opinions and are interested in QA, please feel free to contact me.
Moreover, if you find the idea of doing QA together enjoyable, or if you want to make QA more exciting and better, you are always welcome! Please apply via the recruitment page below. We can start with a casual interview. https://hrmos.co/pages/kinto-technologies/jobs?category=1783696806032474116
Introduction

Hello. I'm Nakahara, in charge of frontend development for KINTO ONE (Used Vehicles) at KINTO Technologies. KINTO ONE (Used Vehicles) is an e-commerce site for re-leasing vehicles that were previously leased through KINTO ONE, allowing users to check information on actual vehicles in stock, complete the contract process, and manage their contracts.

KINTO ONE (Used Vehicles) Website

Despite the extended delivery times for new vehicles, this service comes highly recommended due to the availability of high-quality vehicles with relatively short delivery times and a ¥0 cancellation fee for mid-term cancellations. (As of December 2023, this service is available only in Tokyo and Aichi Prefecture.)

Vehicle Image Issues

Unlike new vehicles, a site that handles used vehicles treats each individual vehicle as a product. Naturally, e-commerce sites must display images of each vehicle. To commercialize the lease-up vehicles, KINTO ONE (Used Vehicles) takes photos of each vehicle and stores the images in the backend vehicle management service. For display on the site, the vehicle information, including the image URLs, was obtained from the backend and built into pages in the frontend server container, while the vehicle images were fetched on the client side from the image distribution path of the vehicle management service.

```mermaid
---
title: Server Configuration (Overview)
---
flowchart LR
    %% External element: User
    U[User]
    %% Groups and services
    subgraph GC[AWS]
        subgraph FE[Used Car Site Frontend]
            subgraph CF["Cloudfront"]
            end
            ECS("Frontend Server")
        end
        subgraph BE[Vehicle Management Service]
            UCAR("Vehicle Management Server")
        end
    end
    %% Relationships between services
    U -->|"Site Access"| CF
    CF --> ECS
    ECS -->|"Obtain Vehicle Information"| UCAR
    U -->|"Obtain Vehicle Image"| UCAR
    %% Group style
    classDef SGC fill:none,color:#345,stroke:#345
    class GC SGC
```

Here, the vehicle images distributed by the vehicle management service were one-size JPGs, which caused the following problems:

- Inefficient compression
- Distribution at a size unrelated to the display size: regardless of the display size on the actual site, images over 1000px in width were distributed

PageSpeed Insights, which we use to measure site performance in practice, also pointed out several issues related to vehicle images.

PageSpeed Insights indication

Ideal State

The first goal is to resolve the points indicated by PageSpeed Insights:

- An efficient compression format for distribution
- Distributing images in sizes matching the display sizes

These improvements not only enhance page display speed and reduce the client's network traffic, but also reduce server-side costs[^1]. In addition, from a development and operational standpoint, the following points matter for achieving the ideal state:

- Minimize additional resources[^2]
- Image conversion settings can be requested from the frontend side
- Low cost and quick turnaround

Since adding resources increases the associated management workload, it is desirable to keep the configuration as small as possible. For the conversion settings, we keep in mind that the requirements for displayed images may change due to design modifications or the addition of new types of high-resolution devices. In such cases, it is desirable to be able to change the resolution and compression format easily from the frontend side.

[^1]: KINTO Technologies mainly builds its services on AWS, where data transfer out to the internet is also a cost factor.
[^2]: "Resources" here refers to a concept that includes not only AWS resources but also external services for image conversion.

Review of Image Conversion Methods

Image conversion and distribution at photo-registration time on the vehicle management service side

This method could meet the requirement if only predefined sizes were generated, but it was not chosen because it does not align with the points above: "Minimize additional resources" and "Image conversion settings can be requested from the frontend side."

Use of external services

There are various CDNs with image conversion functions, including imgix, but we did not select this option either. Based on site usage, the US$200-300 per month plan seemed to fit. In addition, it would likely require some work to change the settings for the current image management storage and to make the internal adjustments for this. Although not having to spend effort on monitoring is an advantage of an external service, we did not choose this because KINTO ONE (Used Vehicles) is still a developing service, and it is difficult to estimate the return on the monetary and time costs of adopting one.

Remote image optimization in Next.js

The method requiring the least implementation work is to optimize images using the Next.js Image Component. However, we decided against this method because it does not suit the current state of the site. We use Next.js as a framework for server-side rendering, but the server runs on a relatively small instance configuration, and due to the high number of images per page, the processing load spikes. It seemed we would need to increase the instance size at a minimum; in fact, the site sometimes went down when these settings were applied at the beginning of the service. We opted against this method to avoid increasing the instance size just for image conversion.

Build our own method with Lambda@Edge (★ chosen option)

This method is used in combination with Cloudfront: a function that performs image conversion is executed on the edge side, and the converted images are distributed. The site itself was already delivered via Cloudfront, so it seemed this could be implemented quickly with just a few additional settings. The cost of the conversion process is almost negligible given the number of images, and compressing the image size further reduces the transfer cost. It takes a little time and effort to implement, but since operation is serverless from then on, as long as the number of concurrent executions is taken care of, it did not seem to require much management effort. Therefore, we decided to use this method.
Adding the Image Conversion Function in Lambda@Edge

Change to the following configuration:

```mermaid
---
title: Server Configuration (After Improvement)
---
flowchart LR
    %% External element: User
    U[User]
    %% Groups and services
    subgraph GC[AWS]
        subgraph FE[Used Car Site Frontend]
            subgraph CF["Cloudfront"]
                LM["Lambda@Edge<br>Resize/WebP Conversion"]
            end
            ECS("Frontend Server")
        end
        subgraph BE[Vehicle Management Service]
            UCAR("Vehicle Management Server")
        end
    end
    %% Relationships between services
    U -->|"Site Access"| CF
    CF --> ECS
    ECS -->|"Obtain Vehicle Information"| UCAR
    U -->|"Obtain Vehicle Image"| CF
    LM -->|"Obtain Vehicle Image"| UCAR
    %% Group style
    classDef SGC fill:none,color:#345,stroke:#345
    class GC SGC
```

We added a behavior so that access to a vehicle image path returns the result of resizing and WebP conversion by Lambda@Edge. Regarding the processing contents of Lambda@Edge, there is plenty of information such as precedents ^3 and the AWS official guide ^4, so I won't delve into the details. Instead, I'd like to touch on some key points of this implementation.

Specifying image conversion contents with query parameters

The image conversion settings can be specified with the following query parameters:

| Query parameter | Description |
| --- | --- |
| width | Specifies the width of the image after resizing |
| quality | Sets the image quality during conversion |

This allows the site to request images at exactly the size and quality it wants to display.

Cache settings

Set the query parameters above as cache keys. Failure to do so may result in a previously generated image being served from the cache, for example displaying a smaller image when a larger one is requested.

Custom loader (next/image) settings

Once image conversion is enabled on the Cloudfront side, it needs to be used from the page side as well. Since this service is built on Next.js, setting a custom loader on the Image Component lets us request the optimal image size for the displayed size without detailed settings. Note that the sizes property is important for selecting the optimal size for display. By setting this value, the srcset attribute will be configured appropriately for the actual display size when the Image Component is rendered.

```tsx
import Image, { ImageLoaderProps } from "next/image";

// Component for vehicle images
export function CarImage({ src, sizes, alt = "vehicle Image", className = "" }) {
  // Custom loader: passes the requested width and quality as query parameters
  function optimizeLoader({ src, width, quality }: ImageLoaderProps) {
    return `${src}?width=${width}&quality=${quality}`;
  }

  return (
    <Image
      className={className}
      src={src}
      alt={alt}
      sizes={sizes}
      quality={90}
      loader={optimizeLoader}
      loading="lazy"
    />
  );
}
```

By creating and using such a component, images can be requested with size parameters matching the display size on the site, and the resized, WebP-converted images are displayed.

Results

We have confirmed that image optimization using this method successfully eliminated the image-related indications in PageSpeed Insights.

PageSpeed Insights image-related indications have been resolved.

Conclusion

This time, we optimized the displayed images as part of our site performance improvements. However, there are still plenty of performance issues left. We will continue to improve site performance and strive to provide a more enjoyable experience for users.
Introduction

Hello, I'm yuki.n, and I joined the company this January! I asked everyone who joined in December 2023 and January this year about their impressions right after joining. I hope this makes useful content for anyone interested in KINTO Technologies, and a good retrospective for the members who took part.

Hoshino

Self-introduction: I'm Hoshino. I joined as deputy general manager of the Mobility Product Development Department, a new department established in January. My work so far has centered on building and running services from a technical perspective.

What is your team's structure? There are four teams under the department: (1) in-house media, (2) incubation projects, (3) development of tools for dealerships, and (4) planning of tools for dealerships. As of February we have 23 employees. The department is engineer-centered, but we also have producers, directors, and designers, so we can run a business end to end.

What was your first impression when you joined KINTO Technologies (KTC)? Was there any gap? Everything is so well organized! Beyond the department briefings, the orientation covers the business flow, the vision, and the mid-to-long-term plan. Precisely because most employees are mid-career hires, having an orientation that gets everyone facing the same direction is wonderful.

What is the atmosphere like on the ground? Although the members range widely in age, from their twenties to their forties, everyone gets along well. I expected many long-tenured members, but quite a few joined less than six months ago, and I sense a real openness to welcoming new people. Work styles are diverse too; remote work seems more frequent than in other departments. Members' backgrounds also vary, so I think it's a great department for people who want to take on challenges. If you're interested, please contact our recruiters!

How did you feel about writing for the blog? I think it's a great initiative. An organization that can share information has an edge in recruiting competitiveness.

[Question from Romie-san] When you build and run a service, stumbling at the start seems very hard to recover from later! What points must not be overlooked when getting started, and what mindset do you value? It's important to recognize that a service only truly starts once people begin using it; value arises from that point, and the service has to be nurtured. Put simply, "design for continued operation." That said, novel services are sometimes not accepted, so at the start it's also important to narrow the scope to must-have requirements and launch with the minimum. Once a service has launched, the thing to avoid above all else, more than any trouble, is "a situation where the service has to end (from the users' point of view)," so it's good to align firmly with the product owner on sustainability and continuity.

Choi

Self-introduction: I'm Choi from the New Car Subscription Development Group, KINTO ONE Development Department; I joined in December. I've worked on the frontend and backend of a variety of web services.

What is your team's structure? The content development team has nine members including me, and most of us are frontend engineers.

What was your first impression of KTC? Was there any gap? There was a thorough orientation after joining, so I felt the systems were well put together. It felt like a hybrid of a large company's structure and a young IT company, which left a very good impression. My other first impression was that the engineers are veterans who still actively explore and study new technologies.

What is the atmosphere like on the ground? In my first month there was a lot I didn't understand, but everyone on the team kindly answered my questions about work. The Osaka office where I work is still small, with about 30 people, so we communicate well with people from other departments. Once a month we hold an information-sharing meeting with lightning talks and exchange ideas on improving the office environment.

How did you feel about writing for the blog? I was a little anxious because I'm not good at writing Japanese, but it gave me a chance to look back on my first two months, so I think it's a good thing.

[Question from Hoshino-san] As a frontend engineer, is there an app that made you think "this is brilliant!"? Frontend technology seems to be advancing quickly these days, and many sites have user-friendly UI/UX. There isn't one app that particularly amazed me, but since I also have backend and app development experience, I'm interested in how Flutter, React Native, and the like let you build without platform constraints. When I first developed apps, we had to build Android, iOS, and web apps separately; eliminating that effort is a huge help for an engineer!

YI

Self-introduction: I'm YI from the Operation System Development Group, Project Development Department. At my previous job at an SIer, I worked on all kinds of development projects, frontend and backend, across industries, mainly on B2B system implementation. I develop the systems used in back-office operations related to KINTO ONE (Used Vehicles).

What is your team's structure? The used vehicle system team has 5 members, plus about 10 business partners.

What was your first impression of KTC? Was there any gap? I was surprised that the purchase of an expensive software license went through with just a Slack approval from Kageyama-san and was ready to use the next day.

What is the atmosphere like on the ground? My impression is that many people are around my age, and that people come from a wide variety of backgrounds.

How did you feel about writing for the blog? I had actually been reading the Tech Blog before joining and vaguely knew about this series, but when it was my turn to write, I thought, "So the time has come...!"

[Question from Choi-san] Is there any activity you'd like to do at the company outside of work (hobbies, sports, etc.)? I'd like to join "ktc-tennis-club," since I've played tennis since my high school club days, and also the golf club and the automobile club. I think building "horizontal connections" with people I wouldn't normally work with is a wonderful thing, so I'm looking forward to enjoying all of it!

HaKo

Self-introduction: I'm HaKo from the Analysis Produce Team, Data Analysis Group. I've worked in research and as an analyst at a research company and a retail mail-order company. I find it fascinating to learn what people feel when they use a service and what motivates them to use it.

What is your team's structure? Nine members in total, including the leader and me. The team has a history: previously subdivided teams were consolidated into one.

What was your first impression of KTC? Was there any gap? Having often worked in environments with an older age range, I was delighted by how few rigid formalities there are.

What is the atmosphere like on the ground? Everyone has their own specialty and area of expertise, so I find it a workplace full of diverse stimulation.

How did you feel about writing for the blog? This is my first time writing a blog; it reminded me of the distant past when I wrote rambling diary entries on mixi.

[Question from YI-san] Has anything changed for you since joining KTC? I took over many handover projects right after joining, and rather than approaching them from my usual ground of sales-promotion planning and analysis, I found myself coming in from the technical side, such as the mechanics of email newsletter delivery.

yuki.n

Self-introduction: I'm yuki.n from the New Car Subscription Development Group, KINTO ONE Development Department. I joined this January as a frontend engineer and am based in Osaka. I'd be happy to get involved in all sorts of things, not just the frontend.

What is your team's structure? As a newly formed team, we currently work as a group of four including me, spanning inside and outside the team.

What was your first impression of KTC? Was there any gap? I was amazed at how well organized everything is, from the orientation at joining to the company rules. It was a fairly new experience for me, so it felt very fresh.

What is the atmosphere like on the ground? It's comfortable, with a calmness in a good sense. All the other members are in Tokyo, but I feel no barriers in communication and can interact freely. They also embrace my "I'd like to try this" ideas, so I'm working with a great deal of freedom and am full of gratitude.

How did you feel about writing for the blog? This is my first company blog, so the writing itself makes me nervous, but I think it's a great initiative.

[Question from HaKo-san] What has surprised or impressed you since joining KTC? This overlaps with my answer about the atmosphere, but it's that my "I'd like to try this" was embraced even right after joining. I felt surprise and delight at the same time.

Kiiyuno

Self-introduction: I'm Kiiyuno from Project Promotion in the Project Development Department. I work on frontend development for KINTO FACTORY at the Muromachi office.

What is your team's structure? Six of us, including me, develop the frontend. I enjoy the title of youngest engineer on the team; I might even be among the youngest in all of KTC.

What was your first impression of KTC? Was there any gap? My impression was that it's relaxed, in a good way. No gap at all; it's exactly the pleasant looseness I expected, which makes me happy. What's wonderful is how readily "I want to do this!" ideas are taken up.

What is the atmosphere like on the ground? "An at-home frog in a well" describes our team: communication within the team is lively and we respect one another as individuals, while being somewhat inward-looking, with room to improve our outward influence. That, at least, is the conclusion StrengthsFinder gave us. I was welcomed very warmly after joining, so it's an atmosphere you can settle into right away.

How did you feel about writing for the blog? I had been asked to write tech blogs at my previous job, so I wasn't particularly apprehensive. As a mass-produced shy boy, self-disclosure makes me anxious, but I'd be happy if this gets people even a little interested, in our company.

[Question from yuki.n] What technologies are you interested in or following right now? I'm following the field of "prompting skills" for getting the most out of tools like ChatGPT. It also comes in handy when using "Sherpa," which is widely used at KINTO Technologies.

K

Self-introduction: I'm K from Project Promotion in the Project Development Department. I work on Salesforce development at the Muromachi office. At my previous job at an SIer, I worked on multi-cloud system implementations across industries.

What is your team's structure? The Salesforce team has 4 members, plus about 10 business partners.

What was your first impression of KTC? Was there any gap? My first impression was that there are many technical study sessions.

What is the atmosphere like on the ground? There are many veteran engineers, and I really feel that everyone actively studies new technologies.

How did you feel about writing for the blog? I'll be writing for the KTC Tech Blog from now on, so I think it will be a good experience.

[Question from Kiiyuno-san] What mindset do you value in development? You need the flexibility to keep up with evolving technology and changing project requirements, adapting to new situations and responding flexibly. When problems occur, you need the ability to handle them calmly and find effective solutions. I believe in pursuing not only routine problem-solving but also creative approaches.

Mukai (mt_takao)

Self-introduction: I'm Mukai (mt_takao), and I joined in December. At my previous job, I was mainly a (digital) product designer and product manager for the B2B products of a taxi-hailing app. At KTC, as before, I work as a product designer in charge of overall design development for products for Toyota dealerships.

What is your team's structure? I belong to the DX Planning Team, Owned Media & Incubation Development Group, Mobility Product Development Department. Our mission is to solve the problems and pain points of Toyota dealerships using the power of digital technology, and we work on that every day.

What was your first impression of KTC? Was there any gap? The onboarding, such as the orientation at joining, was better organized than I had expected. I had many opportunities to hear about organizational issues before joining and made my decision with a good grasp of them, so there was no big gap.

What is the atmosphere like on the ground? The DX Planning Team is still a young organization with many recent joiners, and I resonate with the way each member draws on their past experience and pushes forward tenaciously.

How did you feel about writing for the blog? I see strengthening our ability to share information as a challenge both personally and organizationally, so I'm grateful for the opportunity.

[Question from K-san] Is there a design you consider the best from a UI/UX perspective? "The best design" is a difficult question, but something I've been paying attention to recently is the Apple Vision Pro. AR, VR, and other technologies that extend the real world have already begun to permeate society, but I feel the era has finally arrived in which applications run in the space around us. Reference: Apple Vision Pro 実機レビュー。「空間全部を仕事に使う」世界がやってきた (in Japanese). It can only be experienced in the United States so far, so I'd like to try it once it becomes available in Japan. As an aside, Microsoft's "Productivity Future Vision," which depicts the future of work, feels close to the world the Apple Vision Pro realizes, so take a look if you're interested.

Romie

Self-introduction: I'm Romie; I joined in December 2023 and belong to the Mobile App Development Group, Platform Development Department. My path went from embedded systems to web systems and now to mobile app development, so I still have a lot to learn.

What is your team's structure? We are split into iOS and Android. I'm on the Android side, which has 5 members including me, 3 of whom are from overseas. We're multinational!

What was your first impression of KTC? Was there any gap? I was surprised by the speed with which the latest technologies are actively adopted. The company's support structures are solid, and the culture turned out to be freer than I imagined, which delighted me.

What is the atmosphere like on the ground? We can voice opinions to each other without holding back and work with peace of mind. Our backgrounds all differ, but the relationships are flat, and I feel the team is well balanced.

How did you feel about writing for the blog? Output also helps me reflect on my daily work, and steadily sharing information raises your visibility, so I'd like to keep at it!

[Question from Mukai-san] What do you want to achieve at KTC or in the mobility domain? As a mobile app developer, I want to contribute to KTC, and by extension to the mobility domain, through the apps entrusted to me. To that end, I'll keep catching up on technology and keep working on the growth and development of the products in front of me.

Conclusion

Thank you all for sharing your impressions after joining! New members are joining KINTO Technologies every day. More onboarding posts from people in various departments are on the way, so we hope you'll look forward to them. And KINTO Technologies is still looking for colleagues to work with us across a variety of departments and roles! For details, please check here!
Introduction

Hello everyone! This is Mori from the Global Development Division and the Tech Blog operation team. I usually work as the Product Manager (PdM) for the Global KINTO Web and as the lead for compliance with personal-information-related laws in various countries. (Well, I'm doing all kinds of things lol)

To share some updates from me: the team I lead, the Product Enhancement team, changed its reporting line as of July due to shifts in our organizational structure. (The team name has also changed a little!) Our new group manager is Mizuno-san. There had been some communication in the past, but not much interaction with him or the other teams under him, so we took this opportunity to conduct a "Leadership Integration" workshop with the group manager and team leaders. Today, I would like to share its report📝 It was held in August, but my writing was slow, so this article is being released at the end of the year😅

What is Leadership Integration?

Leadership Integration is a framework for communication between leaders and team members to enhance team cohesion. It is effective when onboarding a new leader or when addressing team relationship issues that call for improved cohesion.

Reference: An Encouragement of Leadership Integration

I became aware of this workshop when I heard that it had been conducted by the Corporate IT Group. Apparently, they had experience with it at their previous jobs. Our situation wasn't exactly "onboarding a new leader," but it was right around the time I joined the new reporting line and I hadn't had much interaction with the other teams, so I thought, "Why not?" I approached my manager and he readily agreed, so I asked Zushimi-san from the Corporate IT Group to facilitate the workshop✨

Leadership Integration Flow

Date & Time: 17:30-19:30, August 29, 2023
Place: Jimbocho Office
Participants: Global Product Introduction Group, Global Development Division
- 1 manager (hereinafter "leader")
- 5 team leaders (hereinafter "members")
Facilitator: Zushimi-san from the Corporate IT Group

| Time | Action | Duration (min) |
| --- | --- | --- |
| 17:30 | Explanation of the initiative / facilitator introduction | 5 |
| 17:30 | Opening from the leader | 3 |
| 17:33 | (Leader leaves the room) | |
| 17:35 | Write down what members know about the leader | 15 |
| 17:50 | Write down what members don't know about the leader | 15 |
| 18:05 | Write down what members want the leader to know | 15 |
| 18:20 | Write down what members can do for the leader and the group | 15 |
| 18:35 | (Members leave the room; the leader enters) | |
| 18:40 | The leader reviews everyone's opinions and thinks about responses | 10 |
| 18:50 | (Members return to the room) | |
| 18:55 | Response time from the leader! | 30 |
| 19:25 | Buffer and free talk | 5 |
| 20:00 | Get-together | |

Introduction

By way of introduction, the facilitator explained the workshop and the leader shared its background. This time, Mizuno-san said, "I'd be happy to receive opinions from everyone in a setting different from the usual one-on-one meetings." And then, surprisingly, he left the room! But this is when it really starts.

Writing Time for Members

This is when members write down what they know, what they don't know, what they want the leader to know, and what each member can do for the leader. I can't share too much here because some of the content is private, but I will mention a few things that came up (sorry, Mizuno-san).

What Members Know About the Leader📝

- When he joined KINTO Technologies, he was first in charge of the Used Vehicle projects in Japan. He then joined Global about a year ago.
- He likes cats🐱
- He likes driving🚗, etc.

While the other leaders already knew him, he was a new manager to me, so I often thought, "Oh, I didn't know that about him" during the workshop.

What Members Don't Know About the Leader📝

- His reason for joining KINTO Technologies
- His career to date
- His development experience💻
- What he evaluates, etc.

Here, on the contrary, I had the impression that, surprisingly, everyone was wondering the same things.

What Members Want the Leader to Know📝

- A desire to communicate more
- Interest in getting involved in R&D
- The struggles of member management
- Interest in going camping with everyone⛺, etc.

Since the workshop was held during our assessment period, and we had all become evaluators for the first time due to the organizational changes, we found that we were all struggling with the same issues.

What Members Can Do for the Leader and the Group📝

- Knowledgeable about vehicles and the automotive industry
- Able to listen to the leader's complaints
- Having connections with other development teams
- Able to plan study sessions for engineers
- Happy to offer new business ideas, etc.

I was surprised, in a good way, at the number of things each of us could do.

Response Time from the Leader

Then the leader rejoined and responded to each of the sticky notes on the spot. Without knowing who wrote what, the leader shared his thoughts. As it is not every day you get a chance to hear what a manager thinks, 30 minutes passed in no time. We were able to hear about his personal life, such as his love of cats despite his cat allergy (poor thing!) and his closet filled with only white T-shirts and black pants (I think I've heard a similar anecdote somewhere...), as well as his views on the members, such as the importance of what has changed since the beginning of the term, and his wish that members do their best without becoming demotivated. In particular, regarding "what members want the leader to know" and "what members can do for the group," since there were a lot of enthusiastic notes, the leader's comments included: "Let's share evaluation and organizational concerns with each other," "Let's work together to solve communication challenges," and "I see that I can entrust more to them." The sense that the team had become united left me feeling inspired.

The wall was covered with this many sticky notes! Sorry about the mosaic.

Impressions

I participated as a member, and it was great to hear what each team leader was thinking, not to mention the deeper connection I made with the manager. Since some projects don't offer the opportunity to work with many people, and not everyone was familiar with all participants, this event was a good opportunity to check whether we were on the same page. I discovered that all participants were thinking seriously about KINTO Technologies and Global KINTO, more than I had imagined.

Voices from participants:

- It was good to learn about the challenges other team members were facing.
- I was able to see the depth and breadth of what I want from and know about my leader, and compare it objectively with other members.
- To create an atmosphere that is easy to talk in, I had been trying not to get angry or show negative emotions. I discovered that my approach may not have been entirely wrong.
- It might be a good idea to do a look-back a few months later to check on the status of what was agreed in the workshop.
- Even things you wouldn't normally dare to say can hold meaning for others. This kind of workshop is effective in bringing them out.
- For me, the timing was good because I was new to the group, but others seemed to wonder, "Why now?" I regret not having given more thought to the timing of the workshop. However, no one said it was meaningless, so I feel it would be even more effective at times such as when a new leader takes office!

Conclusion

Throughout the workshop, I felt that the results can be greatly influenced by facilitation, such as time allocation or talking with participants while they write. For example, when someone couldn't quite put their thoughts into words, the facilitator suggested, "Have you thought about XXXXX?", or offered encouraging remarks such as "Right, this is also a new discovery, isn't it?" He also adjusted the time allocation flexibly. Thank you, Zushimi-san, for coming all the way to Jimbocho🙏 Lastly, if you are going to conduct a Leadership Integration workshop, there is a book that was recommended to me. The power of facilitation is truly amazing when done right! 🥺✨ This article has become quite long, but I'd like to close with a picture of the huge, delicious lamb we enjoyed at the get-together🍖🤤 Thank you very much🙏
Introduction to Istio for Non-Infrastructure Engineers

Hello. I'm Narazaki from the Woven Payment Solution Development Group. We are involved in the development of the payment infrastructure application used by Woven by Toyota at Toyota Woven City, developing payment-related functions across the backend, web frontend, and mobile application. Within this project, I am mainly responsible for the development of backend applications. The payment backend we develop consists of microservices and runs on a Kubernetes-based platform called City Platform. In this article, I would like to introduce Istio, a mechanism for setting up microservice networks on Kubernetes. My aim is to explain its purpose and functions in a way that is easy to understand for backend application engineers who are used to writing business logic. I hope this article helps you deepen your understanding of configurations that use Istio, proves useful when isolating causes during troubleshooting, and facilitates smooth communication with infrastructure and network engineers.

What is Istio?

In a microservices architecture, a single piece of processing spans multiple services, which introduces communication costs between those services. As application engineers, we often think that it does not matter as long as things connect, but infrastructure engineers want to control the network layer effectively. That is why Istio was created: it centralizes declarative management of settings such as network routing and security, similar to Kubernetes manifests, and provides integrated operational monitoring of the network. Because the resulting network is structured like a mesh, these functions are collectively referred to as a service mesh.

Istio Architecture: Data Plane and Control Plane

First, it is essential to understand Istio's architecture. Like Kubernetes, Istio is divided into a control plane and a data plane. In Kubernetes, the control plane receives API requests from kubectl and the like and controls resources such as pods, while the data plane is the set of pods where applications actually run. Istio's data plane employs a network proxy called Envoy. Where necessary, the control plane injects Envoy as a sidecar container next to the container where our code runs.

Control Plane and Data Plane

Why Do We Need Istio?

Envoy is a network proxy application that can also run on its own, but its configuration items are so varied that configuring even a single Envoy instance as intended is not easy (at least for non-infrastructure engineers!). In a complex microservices architecture, the network is like a mesh connecting the inside and outside of the cluster, requiring the configuration of numerous Envoy proxies. It is not difficult to imagine how hard it would be to configure each one individually and make everything work the way you want.
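As a concrete illustration of the sidecar mechanism described above: with a standard Istio installation, automatic injection is typically switched on per namespace with a label, and the injected proxy then shows up as an extra istio-proxy container in each pod. This is a minimal sketch; the namespace name test is hypothetical.

# Enable automatic Envoy sidecar injection for the "test" namespace
kubectl label namespace test istio-injection=enabled

# After pods are (re)created, each one should list an "istio-proxy"
# container alongside the application container
kubectl get pods -n test -o jsonpath='{.items[*].spec.containers[*].name}'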
Resources Configurable in Istio

Introducing Istio provides features such as the following:

- Traffic management (service discovery, load balancing)
- Observability (logging, distributed tracing)
- Security, such as authentication and authorization

On the other hand, in our experience as backend application engineers, each of these has often been a black box: we did not know what was actually configurable or which configuration file to look at when we encountered unintended behavior. Let's take a look at what Istio's configurable resources actually mean.

Gateway

Kubernetes networking involves two directions of traffic, into the cluster (ingress) and out of it (egress). Istio intercepts these communications with an Envoy proxy called a gateway; it is literally the gateway to the Istio network. You can set it up with a file like the following. Although this file itself rarely contains detailed settings that application engineers need to know, it is often referenced from other files via gateways, so make sure the Gateway is properly configured first.

apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: test-gateway
spec:
  selector:
    istio: ingressgateway # the LoadBalancer service available by default when Istio is installed
  servers:
  - port:
      number: 80 # listening port
      name: http
      protocol: HTTP # allowed protocol
    hosts:
    - "*" # host name

Virtual Service

Kubernetes has a mechanism called Service that allows Deployments and StatefulSets to be accessed from the intracluster network. Istio's Virtual Service, in turn, defines the route to the Service. It is powerful in that a very large number of configuration values can be defined, but caution is needed to avoid duplication with other settings. If requests are not reaching the service, there may be a mistake in the Virtual Service configuration. The istioctl analyze command may point out configuration errors, so let's take a look.

apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: test-virtualservice
  namespace: test
spec:
  hosts:
  - "*" # the rules below apply to requests for this host name; "*" matches any host
  gateways:
  - test-gateway # the Gateway defined above; multiple entries are allowed
  - mesh # 'mesh' allows intracluster communication that does not pass through a Gateway
  http:
  - match: # rules for filtering requests
    - uri:
        prefix: /service-a # URI pattern; regex and other match types can also be used
    route:
    - destination:
        host: service-a # destination service
        port:
          number: 80
  - match: # multiple routing rules and destinations can be defined
    - uri:
        prefix: /service-b
    route:
    - destination:
        host: service-b
        port:
          number: 80
  exportTo:
  - "." # where this rule applies, in terms of Kubernetes namespaces; "." means only the namespace where it is defined
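When requests do not arrive as expected, running the analyzer is a quick first check. A minimal sketch, reusing the test namespace from the examples above:

# Validate the Istio resources in one namespace
istioctl analyze -n test

# Or validate every namespace in the cluster
istioctl analyze --all-namespaces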
Authorization Policy

Communication between specific services can be controlled. Protocols, routing to specific paths, allowed HTTP methods, and so on can be tuned in detail, so application engineers may have many opportunities to configure this resource. On the other hand, misconfigured rules and unexpected pitfalls are common, so be sure to run tests after configuring.

apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: access-allow-policy
  namespace: test
spec:
  selector:
    matchLabels:
      app: some-application # label on the target pod
  action: ALLOW # permission rule
  rules:
  - from: # where the request comes from
    - source:
        principals:
        - cluster.local/ns/test/sa/authorized-service-account # Kubernetes service account
    to: # what the request targets
    - operation:
        methods: ["POST"] # allowed HTTP methods
        paths:
        - "/some-important-request" # permitted endpoints
---
apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: deny-policy
  namespace: test
spec:
  selector:
    matchLabels:
      app: some-application
  action: DENY # example of denying requests
  rules:
  - to:
    - operation:
        paths: ["/forbidden-path"]

Other Settings

There are other configuration resources such as Destination Rule, Service Entry, Peer Authentication, and Envoy Filter, but I will omit them here. Basically, just like Kubernetes resources, each has its own schema with its own configuration items. If your team uses one of these resources, it is a good idea to check the documentation once to see what can be configured.

Specific Examples of Common Debugging and Troubleshooting

First of all, make sure there are no flaws in the configuration. If you run the istioctl analyze command, most misconfigurations will be reported as errors. If RBAC is enabled, as in a production environment, and access to Istio-related resources is restricted, have an authorized infrastructure engineer run it. If there is no misconfiguration that causes errors, check how far requests actually get. Look at the application or sidecar logs to see whether communication breaks at the gateway or somewhere on the way to the application pods. If requests appear to pass through the gateway, check the sidecar container logs in the namespace of the pod that should be receiving them, for example with kubectl logs <pod-name> -c istio-proxy -n <namespace>. For intracluster communication you can run curl from a container, but since recent Docker base images often omit tools that are not needed to run the application, attach a debug container, for example with kubectl debug <pod-name> -n <namespace> -it --image=curlimages/curl:latest -- /bin/sh, and see whether you can resolve names inside the cluster. If communication is being blocked, check the Virtual Service file. If there is a problem with authentication, refer to the Authorization Policy file to locate the misconfiguration. Routing and authentication are areas where settings made in multiple layers easily conflict. You can list the authorization rules applied to a pod with the istioctl x authz check <pod-name>.<namespace> command. In addition, what looks like a network error at first glance often turns out to be an implementation problem, so the implementation side should also review the network and authentication/authorization settings. The following is what I do when I run into network-related errors (the sketch after this list shows the corresponding commands):

- Isolate the cause by running the istioctl analyze command or checking the logs to see whether the Istio configuration is incorrect.
- Check network communication from inside and outside the cluster using curl and kubectl debug.
- Check the application configuration, for example whether the deployed application listens for requests on the port specified by the infrastructure layer.
- Check whether the client application implements the required authentication and authorization mechanisms on its requests.
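Put together as commands, the checklist above looks roughly like this. This is a minimal sketch; the pod, namespace, and service names are hypothetical.

# 1. Report misconfigured Istio resources
istioctl analyze -n test

# 2. See whether requests reach the sidecar of the target pod
kubectl logs some-application-pod -c istio-proxy -n test

# 3. Attach a throwaway curl container and test intracluster connectivity
kubectl debug some-application-pod -n test -it --image=curlimages/curl:latest -- /bin/sh
# then, inside the debug container:
#   curl -v http://service-a.test.svc.cluster.local/

# 4. List the authorization rules applied to the pod
istioctl x authz check some-application-pod.test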
These points can also be checked, both for misconfiguration and for communication status, via a GUI if an observability stack such as Kiali is enabled.

Conclusion

By learning the specific configurable items and their meanings, I hope you gained insight into some of the functions that used to be black boxes. Some of you may also have realized that the configuration items are surprisingly simple. On the other hand, I believe the real difficulty of Istio lies not in the network configuration itself but in the production operation phase, such as ensuring continuous stable operation (applying version patches and verifying behavior each time). As a backend application engineer, I would like to deepen my understanding of Istio's behavior and test application performance under actual operational conditions.
Introduction

Hello. I am Ito, and I do backend development and operation of the KINTO FACTORY service (hereinafter, FACTORY) at KINTO Technologies. As part of the Advent Calendar series about FACTORY, I will write about how we improved its master data management.

About the Master Data Management of KINTO FACTORY

Various types of information are stored as master data, including vehicle models, the products KINTO FACTORY manages, and details of the dealerships capable of handling the vehicle modifications KINTO FACTORY offers. *Product prices are as of December 11, 2023. Although only in Japanese at the time this article was released, you can check the KINTO FACTORY website for the latest prices. The base information is provided by Toyota and its dealers in Japan, and the planning department enters it into Excel. The Excel file is then shared with the development team, converted to a CSV file, and registered in FACTORY.

It's a pretty demanding operation

When this process first started, there was a lot of frustration:

- Saving nearly ten Excel files as CSV files, converting newline characters, and deleting BOMs was all done manually and took time and effort.
- Excel functions were used to check the entered data, which made the files heavy to work with.
- Some checks were hard to express as Excel functions, so they had to be done by humans.
- Item lists were long, some data was duplicated, and there were input mistakes.
- Repeated checks and revisions in a verification environment took a lot of work.

Improvement was clearly needed, so we made the following changes to get rid of those annoyances.

Review of the Excel Input Format

First, we reviewed the Excel format:

- Removed unnecessary items that had been created for potential future use
- Removed duplicates and items that can be derived from other input values
- Added input assistance with Excel functions

We reduced the burden of input by removing unnecessary items, and reduced input errors by about 75% using Excel functions and input rules.

Automating the Excel-to-CSV Conversion

Next, we developed a tool in Go to convert the Excel files into CSV files:

- Automated reading the Excel files, checking them, and converting them to CSV
- Reduced the number of Excel functions by having the tool perform the checks that used to be done with them
- The Go tool needs only minor revisions even when items are added to an Excel file

By automating the Excel-to-CSV conversion, we reduced the effort required and eliminated manual operation mistakes, and tasks that used to take almost a day can now be completed in minutes. By making the tool do all the work we used to do manually in Excel, we were finally freed from dedicating our time to just being Excel experts! (Personally, this was my favorite part.) A simplified sketch of such a conversion flow follows.
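The following is a minimal sketch of such a conversion flow, not our actual tool. It assumes the excelize library for reading Excel files, and the file name, sheet name, and column check are hypothetical.

package main

import (
	"encoding/csv"
	"fmt"
	"log"
	"os"

	"github.com/xuri/excelize/v2"
)

func main() {
	// Open the Excel file shared by the planning department.
	f, err := excelize.OpenFile("master_data.xlsx")
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	// GetRows returns every cell as a string, row by row.
	rows, err := f.GetRows("vehicles")
	if err != nil {
		log.Fatal(err)
	}
	if len(rows) == 0 {
		log.Fatal("the sheet is empty")
	}

	// A simple example of a check that used to be done with Excel
	// functions: every data row must have as many columns as the header.
	wantCols := len(rows[0])
	for i, row := range rows[1:] {
		if len(row) != wantCols {
			log.Fatalf("row %d: expected %d columns, got %d", i+2, wantCols, len(row))
		}
	}

	// encoding/csv writes LF newlines and no BOM by default, so the
	// manual newline conversion and BOM deletion are no longer needed.
	out, err := os.Create("master_data.csv")
	if err != nil {
		log.Fatal(err)
	}
	defer out.Close()

	w := csv.NewWriter(out)
	if err := w.WriteAll(rows); err != nil {
		log.Fatal(err)
	}
	fmt.Printf("converted %d rows\n", len(rows))
}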
Summary

In this article, I shared how we improved the master data management of FACTORY. We cut down the time spent by reviewing the Excel formats, automating the conversion to CSV files, and reducing input mistakes. However, we are still facing issues. In particular, communication between planning and development takes time no matter what we do, so we are thinking of changing the input-to-verification environment in order to streamline the checking process. We will continue to improve our master data management and provide an even better service.

Conclusion

KINTO FACTORY is looking for new partners. Please check our job listings if you're interested!