KINTO Technologies Tech Blog

A Memorable First Request

This is HOKA from the "Manabi no Michi-no-Eki" (Learning Roadside Station) management office. In February 2024, at the monthly meeting attended by every member of KINTO Technologies, we announced, "Manabi no Michi-no-Eki is launching!" Soon after, Nakaguchi-san of the iOS team in the Mobile App Development Group reached out: "I'd like to consult about our study sessions."

A Consultation About the Mobile App Development Group's Study Sessions

It was our very first inquiry. The four iOS team leaders and three office members promptly held a meeting, and we learned the following: with the goal of raising the overall level of the iOS team, the sessions have been held weekly since June 2023. In the first week of each month, everyone decides together what to do, and in weeks two through four they carry it out. Facilitation rotates among the members. Sessions have included casual chats, lightning talks, and book readings; there was even a presentation on the HIG.

My impression as HOKA was: "This is already run well. Is there really anything to worry about?" This, by the way, is a classic KINTO Technologies employee trait. Through this consultation, the three of us in the office were invited: "Please come and observe a study session!"

Drop-in Visit! The Study Session Next Door: Introductions and Casual Chat

And so we made the "drop in on the study session next door" visit we had wanted to make. On March 12, 2024, the iOS team gathered online and in the meeting room, and the session began. Since new members were joining that day, the theme was a "casual chat session" doubling as self-introductions. First came introductions: 18 people at one minute each, about 20 minutes in total, covering name, product in charge, and recent news. Even at one minute apiece, with a live Slack commentary running alongside, it was an efficient round of introductions that let first-time visitors like us get a feel for everyone's personality. And we from the Michi-no-Eki office slipped in our own introductions, too.

The second half moved to open conversation. One topic that came up: "Yesterday, Awata-san, who was visiting Muromachi, said that deploying from Slack is reaching its limits. It would be great to have a companion mobile app that works without signing in." One member then proposed, "Why don't we build it outside regular work? The Mobile App Development Group has producers and backend engineers too. I've created a Slack channel, so if you're interested, let's discuss it there."

Oh! Then Hinomori-san, an assistant manager, added, "Building a Manabi no Michi-no-Eki app could also be good. Internal-facing apps are a nice thing to build. Add NFTs, maybe KTC tokens." Yajima-san: "Maybe you earn points for attending study sessions?" Hinomori-san: "And whoever has accumulated points by year-end gets something? Taking on projects like that, interesting ones not yet ready to release externally, seems like a good idea." Nakano-san: "It might be good for people inside the company to do development for the company!"

An unexpected tailwind for Manabi no Michi-no-Eki! We were delighted. "There's learning here beyond writing source code," someone remarked, and the chat was full of comments offering hints for growing as an engineer. This study session really is excellent, isn't it?

The conversation developed further, and the group got excited discussing "try! Swift Tokyo," the event scheduled for the end of March, as the topic for April's session. With homework to bring back the following week, the iOS engineers returned, each down their own road.
Introduction

Hello, I am Takaba, a Product Manager in the Global Development Group at KINTO Technologies. In this article, I will share my tips on communicating effectively with various stakeholders as a Product Manager. Having worked on products for many years, I have seen how communication influences the atmosphere and success of a project. Here are some of the things I have experienced and still practice every day.

The Pyramid Style

As a Product Manager, I talk to people often. To make sure others can understand me clearly, I communicate using a method called the Pyramid Principle for logical speaking. There are three reasons I use it.

First, I needed some kind of method because I am not very good at speaking in public. For example, when I give a presentation or take part in a discussion, this method keeps me from falling into a loop where I worry about whether the listeners understand what I am saying and then become even worse at conveying what I want to.

Second, the job of a Product Manager involves talking to many people, who inevitably offer many different opinions, and organizing them is difficult. A Product Manager discusses the product with many stakeholders, but the stakeholders hold differing views, and it is sometimes hard to choose among the many options on the table. Using this method at times like that lets me resolve things relatively smoothly.

Third, you have to speak logically to communicate information accurately, and speaking logically lets the listener understand better. When I explain something, a logical structure usually makes it easier to follow. By communicating what you want to say logically, you can convey it in an approachable way that is easier for others to understand. How you communicate matters a great deal, because an approachable delivery can affect the atmosphere and success of a project.

What I have just explained was itself structured with the pyramid principle in mind. The method is described in the book "Speak in One Minute" [^1] by Yoichi Ito, who taught me in person at a seminar held at an IT company where I used to work. It is the method I reached for when I first became a Product Manager.

Let me now walk through the pyramid-style speaking method. As the top of the pyramid suggests, you start with the conclusion: say first what you most want to convey. The next layer is the reasoning: state the reasons that support the conclusion. Relying on a single reason is weak; aim for at least three. The third layer is examples: the more specific the examples, the more likely you are to convince the listener. This part fleshes out your conclusion and aids understanding, so be specific and easy to picture.

Here is an example. The conclusion is, "There should be a regular product meeting once a week." In the pyramid style, it looks like this.
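For illustration only (the book's figure is not reproduced here, and the supporting reasons and examples below are hypothetical, added for this article), the layout would be roughly:

- Conclusion: "There should be a regular product meeting once a week."
- Reasoning (aim for at least three): decisions stop piling up between milestones; all stakeholders hear the same status at the same time; risks surface while they are still small.
- Example: "Last quarter, a spec question sat unanswered for two weeks; a weekly meeting would have caught it within days."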
Pyramid-style logical speaking is basic, and there are plenty of occasions to use it in business, but how many people actually use it in their day-to-day work? I don't think that many do. I think a lot of people assume that because they understand a concept, the listener can also understand it, so they tend to cut sentences short and skip over key words. Like me, many people cut conversations short because discussing things in depth can feel bothersome. Training is necessary, because speaking logically takes skill and practice. If you use the method every day, you will apply it more accurately and communicate in a way that is easy for listeners to follow.

Applying the Pyramid Principle also Involves Hypothetical Thinking

Many logical thinking textbooks may say the opposite: starting with a conclusion and then reasoning backwards, as in the pyramid style, can lead to a form of "self-centered logic." However, the author says that in today's fast-paced world, prioritizing speed, even if it means adopting a somewhat self-centered approach, is acceptable if it helps formulate thoughts quickly. Even if your explanation isn't fully complete, engaging stakeholders with your reasoning brings you closer to a conclusion. For example, discussing with the entire project team at an early stage, polishing ideas, and ensuring everyone is on the same page will speed the project toward success. When sharing and discussing, every team member brings different ideas to the table, and by putting these many opinions together we can reach a conclusion that is more objective and better for everyone. I find this kind of hypothetical thinking (thinking from the conclusion, as above) efficient and speed-oriented, and the ability to think objectively that it brings is a great skill to have.

Using the Pyramid to Improve Listening Skills

I have described the pyramid style from the speaker's perspective, but it can also be used to train active listening. The other day, a colleague suggested that I improve my ability to understand what is being said. That is when it struck me to use the pyramid method for understanding as well: I picture a box in my head corresponding to the "conclusion + reasoning" layers of the pyramid, and as I listen, I sort the incoming information into it. Listening while segmenting this way makes it easier to see what the core of the story is and what is missing. (Source: Yoichi Ito, "Speak in One Minute", SB Creative, 2018 [^1]) This is not something that can be mastered immediately; we can improve listening and comprehension by practicing it in daily conversations as well. I therefore build this method into both my speaking and my listening practice every day.

Conclusion

Today, I talked about a communication method I use in my day-to-day work. Are there any methods you consciously use every day? If the pyramid method of speaking interests you, I recommend giving it a try. Communication skills are essential to becoming a better Product Manager, so I will keep working on them together with everyone.

References

[^1]: Yoichi Ito, "Speak in One Minute", SB Creative, 2018
Hello (or good evening), this is part 6 of our irregular Svelte series. To read the previous articles, click the links below:

- Insights from using SvelteKit + Svelte for a year
- Comparison of Svelte and other JS frameworks - Irregular Svelte series-01
- Svelte unit test - Irregular Svelte series 02
- Using Storybook with Svelte - Irregular Svelte Series 03
- Exploring Svelte in Astro - Irregular Svelte series 04
- SvelteTips - Irregular Svelte series 05

In this article, I will be writing about SvelteKit SSR deployments.

@sveltejs/adapter-node

Deploying with SSR requires an adapter. This time, I will use the Node adapter. You can get the module here: https://www.npmjs.com/package/@sveltejs/adapter-node It is also listed on the official GitHub: https://github.com/sveltejs/kit

Express

This is a web framework for Node.js. There are others, such as Fastify, but you are free to use any of them. https://www.npmjs.com/package/express

Environment Settings

First, configure the settings in Svelte. SvelteKit is SSR by default, so there is nothing special to set there. However, you need an adapter to build for deployment. As described on the official site, install it from the Svelte project with `yarn add -D @sveltejs/adapter-node` and add the following code to svelte.config.js:

```js
import adapter from '@sveltejs/adapter-node';

const config = {
  kit: {
    adapter: adapter()
  }
};

export default config;
```

After building your project with `yarn build`, the resulting files are placed in the default output location, /build, and the files index.js and handler.js are created. If you want to use the server as is with the built files, you can execute `node build` to run build/index.js, start the server, and check that it works. (In `node xxxx`, xxxx is the output location of the built files; the default is build.)

Next, put the Express configuration file, server.js, in the root directory. (Install express in advance.)

```js
import { handler } from './build/handler.js';
import express from 'express';

const app = express();

// For example, a health check path for AWS, unrelated to the SvelteKit app
app.get('/health', (_, res) => {
  const param = { value: 'success' };
  res.header('Content-Type', 'application/json; charset=utf-8');
  res.send(param);
});

// The SvelteKit app created by the build is handled here
app.use(handler);

app.listen(3000, () => {
  console.log('listening on port 3000');
});
```

After completing the above settings, you can start the Express server with `node server.js` and check the SvelteKit app at http://localhost:3000.

Deploy the App to AWS

From here on, we will deploy the app to AWS. In AWS, deployments can be configured in various ways depending on the requirements. In this article, I will show you how to make the app reachable from the Internet using only EC2. For security and performance reasons, consider combinations such as CloudFront, ALB, and a VPC in practice. As AWS services incur charges, it is advisable to monitor costs and stop unused services.

EC2

This is the cloud server service that will host the SvelteKit app: https://aws.amazon.com/jp/ec2/

Create an EC2 Instance

First, set up EC2. To create an EC2 instance, go to the EC2 Dashboard and click "Launch Instance" in the upper right corner. You will then be taken to the launch screen. Configure the following items and click "Launch Instance."

- Name: Choose a name that identifies your instance easily.
- OS image: Choose according to your preferences; for this article, I will use Amazon Linux, with the subsequent commands based on it.
- Instance type: t2.micro (other instance types incur charges).
- Key pair: In this article, I will access EC2 with an SSH client, so create one.
- Network settings: Allow SSH for the client connection and HTTP for basic web accessibility checks.

Connect to the EC2 Instance

After creating the instance, you are returned to the list screen, where you should see it. Next, connect to the instance and finish the remaining setup. Select the instance on the list screen and click the "Connect" button. You can choose from four connection methods; this time, I will use SSH. I assume you already created and downloaded a key pair on the launch screen earlier; use that key as instructed on the screen to connect. Once connected, install the required software.

Setting Up Node.js

I will install Node.js first. You can install multiple Node.js versions and conveniently switch between them using Node Version Manager (nvm): https://github.com/nvm-sh/nvm

```bash
curl -o- https://raw.githubusercontent.com/nvm-sh/nvm/v0.39.3/install.sh | bash
```

After installation, the terminal prints a message. You need to put the nvm command on your path so it can be executed. Copy the following, paste it into the command line, and execute it.

```bash
export NVM_DIR="$HOME/.nvm"
[ -s "$NVM_DIR/nvm.sh" ] && \. "$NVM_DIR/nvm.sh"
[ -s "$NVM_DIR/bash_completion" ] && \. "$NVM_DIR/bash_completion"

# Check that the nvm command works
nvm --version
-> 0.39.3
```

Then install Node.js with nvm:

```bash
# Install Node.js 18
nvm install 18

# Check the installed node and npm versions
node -v
-> 18.16.1
npm -v
-> 9.5.1

# Install yarn (skip this if you want to use npm)
npm install -g yarn
yarn -v
-> 1.22.19
```

The Node.js setup is now complete.

SvelteKit App Placement

Next, place the app in the instance. You could also copy it from your local machine to EC2, but this time I will clone it from the repository on GitHub. First, install GitHub CLI so the repository can be cloned. Installation instructions for Linux are in the official documentation.

```bash
# Commands listed in the official documentation (check it, as they may change)
type -p yum-config-manager >/dev/null || sudo yum install yum-utils
sudo yum-config-manager --add-repo https://cli.github.com/packages/rpm/gh-cli.repo
sudo yum install gh

# Version check
gh --version
-> 2.31.0
```

Next, log in with your account and clone the repository.

```bash
# Log in to GitHub
gh auth login

# Use the URL of the repository you want to clone
gh repo clone https://github.com/xxxxxx/yyyyyyy
```

Now the app has been cloned to the instance.

Setting Up Nginx

The next step is to install Nginx and modify its config file.

```bash
# Install
sudo yum install nginx

# Go to the nginx folder
cd /etc/nginx

# Open the nginx config file with vim
sudo vim nginx.conf
```

In the config file there is a section called server. Set the proxy path as follows. This tells nginx to forward requests for / to the SvelteKit server running in EC2.

```nginx
server {
    location / {
        proxy_pass http://localhost:3000;
    }
}
```

Launch the Node Server and Access It from the Web

Finally, build and start the node server, just as you did for the local check.
```bash
yarn install
yarn build
node server.js
```

Then try accessing it via the DNS name provided by the EC2 instance you created. (You can find it on the EC2 instance list page.) You should now see something like this!

However, if you close the connection to the EC2 instance, the Node server also stops. So we use a library called pm2 to keep the Node server running: https://pm2.io/docs/runtime/guide/installation/

```bash
yarn global add pm2
pm2 -v
-> 5.3.0

pm2 start server.js

# Check the status of the node servers currently running with pm2 and the id of the one you want to stop
pm2 status

# Stop a node server currently running with pm2
pm2 stop [id]
```

Now, even if you disconnect from EC2, you can still browse the app from the web! This is all on how to deploy a SvelteKit SSR app to AWS.
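One optional follow-up, not covered in the steps above: with this setup, the app will not come back if the EC2 instance itself reboots. pm2's standard startup integration covers that case; a minimal sketch:

```bash
# Generate and register a boot-time init script (prints a sudo command to run once)
pm2 startup

# Save the currently running process list so pm2 restores it after a reboot
pm2 save
```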
Introduction

Hello, I am nam, and I joined in November! In this article, I asked everyone who joined in February and March 2024 about their impressions right after joining, and summarized the answers. I hope this makes for useful content for anyone interested in KINTO Technologies, and serves as a reflection for the members who took part!

J.O

![alt text](/assets/blog/authors/nam/newcomers/icon-jo.jpg =250x)

Self-introduction
I am J.O, who joined in March. I belong to the New Car Subscription Development Group in the KINTO ONE Development Department as a producer. At my previous job, a business company, I planned and operated the company's own consumer-facing web and app services and compiled business-side development requirements.

How is your team structured?
The New Car Subscription Group has various teams covering the backend, frontend, content development, and tool development; including partner companies, it has more than 40 people, making it one of the largest departments in the company.

What was your first impression of KTC when you joined? Were there any surprises?
I felt that expectations for KTC are even higher than I had imagined before joining, in terms of its relationships and standing with the group companies and the role it is expected to play in providing a new platform for the mobility industry.

What is the atmosphere like on site?
Since it is an engineer-centered company, you might expect a subdued atmosphere, but it is actually quite convivial. The Slack chats and emojis are lively, too. Being a car-related company, many people decorate their desks with car models.

How did you feel about writing a blog post?
I had opportunities to create site content at my previous job, but this is my first time writing about myself, so I am nervous.

Question from S.A: "Tell us something that surprised or impressed you after joining KTC."
The sheer number of events, such as study sessions. Something is held at least once every two weeks, and I was struck by everyone's eagerness to take in and share new information.

nam

![alt text](/assets/blog/authors/nam/newcomers/icon-nam.JPG =250x)

Self-introduction
I joined KTC in February. I am nam. At my previous job, I was a frontend engineer at a production company.

How is your team structured?
It is a small team, and my impression is that everyone's responsibilities are clearly divided.

What was your first impression of KTC when you joined? Were there any surprises?
The orientation was very thorough. I felt a strong message of "we all move forward facing the same direction."

What is the atmosphere like on site?
Members working on the same project sit near one another, so people work while consulting each other, and my impression is one of working freely. It was my first time working in a large office, and I had imagined "a vast space with only the sound of keyboards echoing," but that was not the case at all, which was a relief.

How did you feel about writing a blog post?
I had been reading this tech blog since before I joined, so I am nervous to finally be on the writing side.

Question from J.O: "As a frontend engineer, which websites make you think, 'The build quality is amazing!'?"
I used to do a bit of design, so sites where design and technology are in harmony really impress me. I believe a site whose "build is amazing" is really a site whose "way of being built is amazing." There are sites where I cannot even imagine how much discussion went into the planning stage, how the engineers and designers communicated, and how deeply they understood each other's domains. When I see a site that excels in both design and technology and is well balanced, I think, "Amazing, strong, the best!"

KunoTakaC

![alt text](/assets/blog/authors/nam/newcomers/icon-kuno-takac.jpg =250x)

Self-introduction
I am Kuno from the KTC Administration Department, in charge of labor management systems in general (SmartHR, Recoru, Rakuro, Kaonavi, etc.). My previous job was as an SE attached to a factory, and before that I ran a handyman business (mainly infrastructure for small and medium-sized companies). In 2023 I was certified with a grade 4 physical disability (lower-limb paralysis), but there is nothing in particular you need to be careful about. I usually carry a cane, which makes me easy to spot, but when I do not have it, I become hard to recognize, so please remember my face as well!

How is your team structured?
The Administration Department has 11 people, of whom 2 are in KTC Administration. Narrow it down to Nagoya and... there is 1! We get along well, so do not worry.

What was your first impression of KTC when you joined? Were there any surprises?
Since it is an IT company, I expected that even administrative conversations would happen through some system, but I was a little surprised that people actually meet face to face in meeting rooms.

What is the atmosphere like on site?
It is basically quiet, but the atmosphere makes communication easy, and we chat now and then. The Administration Department has free seating, so it is convenient to pick a seat near the person you want to talk to.

How did you feel about writing a blog post?
I felt it could help spread the #lower-back-care channel on Slack. I also have work beyond the labor systems, so it is a little demanding, though.

Question from nam: "Three months in, is there anything different from your previous job, or any realization unique to KTC?"
In a word, it is quiet. At my previous job, every day was like a live concert: the jet-engine roar of the server room air conditioning, machine-tool vibrations you could mistake for an earthquake, the drum-like clatter of dot-impact and electronic printers, the warning sounds of the three-color signal tower, with SystemWalker alerts and telephones as the accents. Also, my previous job was on-premises only, so this was my first contact with SaaS. Depending on the task, I find myself thinking "on-prem would be nice here" or "SaaS is great!", so I have come to realize each has its strengths and weaknesses.

M

![alt text](/assets/blog/authors/nam/newcomers/icon-m.jpg =250x)

Self-introduction
I jumped into a new environment because I wanted to take on in-house product development, which was hard to experience at my previous job.

How is your team structured?
Our team develops products that make the car-proposal work done at dealerships more efficient and sophisticated. It includes a tech lead, frontend engineers, and backend engineers.

What was your first impression of KTC when you joined? Were there any surprises?
Before joining, I had the impression of a "grown-up startup," so I expected to be asked to be autonomous and self-driven from day one. I was a little surprised that the onboarding was careful and unhurried, from hands-on sessions to a dialogue session with the president. Thanks to that, I quickly absorbed domain knowledge that was new to me and got to know the executives.

What is the atmosphere like on site?
In my development team, several product developments run in parallel, so we communicate a lot to keep everyone aware of what the others are doing: at the daily stand-up each member shares which development and which tasks they are focusing on, and when we are in the office we casually strike up conversations.

How did you feel about writing a blog post?
I have never had the opportunity to share information through a blog, so it feels fresh.

Question from KunoTakaC: "What is your favorite tidying-up item? Something really practical, please!"
For anyone struggling with phone and PC charging cables scattered across the desk or floor, I recommend the "cheero CLIP universal clip"! It has a magnet and is easy to attach and detach, so when you spot stray cables, bundling them right away is the move. It also bends like wire and holds its shape, so you can even use it to prop up your phone while you half-watch videos!

R.S

![alt text](/assets/blog/authors/nam/newcomers/icon-rs.jpg =250x)

Self-introduction
I am R.S from the New Car Subscription Development Group, KINTO ONE Development Department. I am in charge of the KINTO ONE frontend.

How is your team structured?
A team of six.

What was your first impression of KTC when you joined? Were there any surprises?
The flexibility in work styles is high; as a parent in a dual-income household raising children, the full-flextime system helps me enormously.

What is the atmosphere like on site?
At our weekly planning we make clear what each person should do, and then we quietly get on with the work.

How did you feel about writing a blog post?
I did not expect to write one this soon, but having written once, I have become much more aware of our company blog.

Question from M: "When you take on something new, how do you catch up? Share your learning tips!"
When something catches my interest, I take a step toward it. My nature is broad and shallow, so I suppose my tip is "just try it"? Sometimes something completely different that I experienced long ago "connects the dots into a line," and I love those moments.

Hanawa

![alt text](/assets/blog/authors/nam/newcomers/icon-hanawa.jpg =250x)

Self-introduction
I am Hanawa, a frontend engineer in the New Car Subscription Development Group, KINTO ONE Development Department. At my previous job, I also worked as an engineer focused mainly on the frontend. I want to apply the knowledge and experience I have built up while sharpening my technical skills regardless of domain.

How is your team structured?
A frontend team of six.

What was your first impression of KTC when you joined? Were there any surprises?
The generosity of the benefits is remarkable.

What is the atmosphere like on site?
Everyone is highly attuned to new technology and good at sharing it, which is inspiring. I think it is an environment where it is easy to make proposals; there are actual cases of services born from engineers' ideas, and that kind of culture seems to be fostered across the whole company.

How did you feel about writing a blog post?
I have not done much external sharing until now, so I thought this was a very good opportunity. Beyond this joining entry, I would like to write an article on some tech topic.

Question from R.S: "Has anything changed significantly from your previous job?"
Compared to my previous job, this is a much larger engineering organization (my previous company had five engineers on staff). Honestly, I have not fully grasped who works on which product and what everyone does. Various events, including study sessions, are held regularly across departments, so I hope to deepen my understanding by joining them.

Taro

![alt text](/assets/blog/authors/nam/newcomers/icon-taro.jpg =250x)

Self-introduction
I am Taro, and I joined the KTC Creative Office.

How is your team structured?
Nine people: directors and designers.

What was your first impression of KTC when you joined? Were there any surprises?
From the orientation at joining, I sensed a "One Team" organization intent on aligning everyone's vectors and pushing forward.

What is the atmosphere like on site?
My teammates are cheerful, kind, and highly conscious of creative quality. Communication is active, so it is a stimulating environment where we constantly exchange opinions and ideas as we work.

How did you feel about writing a blog post?
I thought, "Ah, this is that thing I read in the Tech Blog archives."

Question from Hanawa: "What do you pay the most attention to in your daily work?"
"The current state and the goal" with respect to "problems, needs, and value."

S.A

![alt text](/assets/blog/authors/nam/newcomers/icon-sa.jpg =250x)

Self-introduction
I am S.A, and I joined the Data Analysis Department.

How is your team structured?
Nine in total, including the leader and myself.

What was your first impression of KTC when you joined? Were there any surprises?
I was struck by how pleasantly relaxed it is.

What is the atmosphere like on site?
I feel everyone has their own specialty, and it is a workplace where I can draw inspiration.

How did you feel about writing a blog post?
Writing a blog post is a first for me, so I was nervous, but I think it is a good initiative.

Question from Taro: "A month has passed since you joined. Is there anything you have become more conscious of at work?"
The pace here is fast, so I am making sure I do not get left behind.

Final Words

Thank you all for sharing your impressions after joining! New members join KINTO Technologies day by day! More joining entries from people across various departments are on the way, and we hope you look forward to them. And KINTO Technologies is still looking for people to work with us across a variety of departments and roles! For details, please check here!
Svelte Tips

Hello (or good evening), this is part 5 of our irregular Svelte series. Click here to see the other articles:

- Insights from using SvelteKit + Svelte for a year
- Comparison of Svelte and other JS frameworks - Irregular Svelte series-01
- Svelte unit test - Irregular Svelte series 02
- Using Storybook with Svelte - Irregular Svelte Series 03
- Exploring Svelte in Astro - Irregular Svelte series 04
- SvelteTips - Irregular Svelte series 05

That's a lot of articles so far! This time, using the project from the previous articles, I will explain, as plainly as I can, the places where you may find yourself thinking, "I'm stuck!" or "What do I do now?" The table of contents is as follows:

- SSG Settings
- Differences Between .page.ts and .server.ts
- meta and How to Use It (About the Plugin)
- What is Used in Each Life Cycle

SSG Settings

With SvelteKit, you can easily configure deployment targets by using a module called an adapter. The default is an SSR adapter called adapter-auto, so you need to install a module called adapter-static. I remember being stuck here at first and racking my brain: it was named "auto," so surely it would handle everything. But that was not the case. By simply installing adapter-static and writing the code from the documentation, I quickly produced a build optimized for static hosting (note to self: read the documentation properly...). The official Svelte site has a Japanese translation project, so having translated documentation available was very helpful :)

```js
// without this, you can't build as an SSG
import adapter from '@sveltejs/adapter-static';

/** @type {import('@sveltejs/kit').Config} */
const config = {
  // omitted
  kit: {
    adapter: adapter({
      pages: 'build',
      assets: 'build',
      fallback: null,
      precompress: false,
      strict: true
    })
  }
};

export default config;
```

Details: https://kit.svelte.jp/docs/adapter-static

Differences Between .page.ts and .server.ts

This tripped me up when SvelteKit v1 was released, because things changed significantly. It was a radical change, so you might remember it. Since the release of v1, SvelteKit uses the following two kinds of files by default when fetching data for a page:

- *.svelte => files such as the UI
- *.page.server.ts || *.page.ts => files that define data, such as fetch calls

The files that define data are divided into page.ts and page.server.ts. I didn't understand the difference between *.page.ts and *.page.server.ts at first, so I simply opted for SSG. However, during page transitions, the site began fetching data from the API... Like, whaat?!

- *.page.ts runs on both the client side and the server side
- *.page.server.ts runs only on the server side

So, if you want to do JAMstack with SSG, *.page.server.ts is the right way to go. https://kit.svelte.jp/docs/load#universal-vs-server So again, please read the documentation! The documentation is great.

A correct example of running only on the server side:

```ts
export async function load({ params, fetch }) {
  const pagesReq = await fetch(`APIURL`);
  const data = await pagesReq.json();
  return { data };
}
```

How to Manage meta

Managing meta information is a common challenge for all frameworks and websites. Before discovering this framework, I used to labor through it with the trifecta of Pug, JSON, and Gulp or Webpack, but with Svelte it became easier to deal with.
```svelte
<script lang="ts">
  import { siteTitle, siteDescription } from '$lib/config';

  interface META {
    title: string;
    description?: string;
  }

  export let metadata: META;
</script>

<title>{`${metadata.title}|${siteTitle}`}</title>
{#if metadata.description}
  <meta name="description" content={metadata.description} />
{:else}
  <meta name="description" content={siteDescription} />
{/if}
```

```svelte
<script lang="ts">
  import Meta from '$lib/components/Meta.svelte';

  let metadata = {
    title: 'title, title, title',
    description: 'description, description, description, description'
  };
</script>

<Meta {metadata} />
```

You can create and load a meta component like this. You don't have to make it yourself, as there are wonderful plugins such as this one out there: https://github.com/oekazuma/svelte-meta-tags Thank you, kind stranger!!!!

On the Usefulness of Each Life Cycle Function

Finally, the unavoidable lifecycle functions. Svelte has five: onMount, onDestroy, beforeUpdate, afterUpdate, and tick.

onMount
As the name implies, this runs after the component is first rendered to the DOM. Its timing is almost the same as the mounted hook in Vue.

onDestroy
As the name implies, this runs when the component is destroyed. You can prevent memory leaks by cleaning up when processing is no longer necessary. Also, for server-side components, this is the only lifecycle function available.

beforeUpdate
This is a lifecycle function that runs before the DOM is updated, so it is often used when you want to act on state changes first. Since it runs before the DOM is updated, you need to be careful when writing DOM-related processing here.

afterUpdate
This runs after the component's data has been reflected in the rendered DOM; it is the last lifecycle function to run in an update cycle.

tick
tick returns a promise that resolves once pending state changes have been applied to the DOM, so you can wait for the DOM to update before doing anything further.

Svelte is relatively easy to understand because it has fewer lifecycle functions than other frameworks. This is all for my Svelte Tips article today.

Conclusion

I wrote a special article about Svelte titled "Getting Started with Svelte" in the July 2023 issue of Software Design, a Japanese magazine. Please feel free to give it a read if you're interested :) (It also includes a JAMstack tutorial with SSG, so give it a try!) https://twitter.com/gihyosd/status/1669533941483864072?s=20
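One addendum to the lifecycle section above: tick is the function easiest to see in code. A minimal sketch (the component below is hypothetical, written for this article):

```svelte
<script lang="ts">
  import { tick } from 'svelte';

  let items: string[] = [];
  let list: HTMLUListElement;

  async function add() {
    items = [...items, `item ${items.length + 1}`];
    await tick(); // wait until the new <li> actually exists in the DOM
    list.lastElementChild?.scrollIntoView();
  }
</script>

<button on:click={add}>Add</button>
<ul bind:this={list}>
  {#each items as item}
    <li>{item}</li>
  {/each}
</ul>
```

Without the `await tick()`, the `scrollIntoView` call would run before Svelte has flushed the state change, so it would scroll to the previous last item.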
Introduction

Hello, I am Tada from the SCoE Group at KINTO Technologies (KTC). SCoE stands for Security Center of Excellence, a term that may still be unfamiliar. This April, KTC reorganized the CCoE team into the SCoE Group. In this post, I would like to introduce the background of that change and the SCoE Group's mission. The CCoE team's activities are covered in a previous blog post, so please have a look if you are interested.

Background and Challenges

To explain how the SCoE Group came about, let me first describe its predecessor, the CCoE team. The CCoE team was established in September 2022. I joined KTC in July 2022, so it was set up right after I joined. At its founding, the CCoE set out two pillars of activity:

- Cloud "utilization": enabling continued efficient development through shared services, templates, knowledge sharing, and people development
- Cloud "governance": allowing teams to use the cloud freely under appropriate policies while keeping it secure at all times

We worked on both utilization and governance, but since other teams within our group had already played the central role in utilization before the CCoE was formed, the CCoE's activities centered on governance. As introduced in the previous blog post, the main governance activities were:

- Creating standardized cloud security guidelines
- Providing security-preset cloud environments
- Cloud security monitoring and improvement activities

In particular, the monitoring and improvement activities involved checking for risky settings and operations whenever a product team's cloud environment posture was deficient, and asking and helping the product side to fix any problems. However, attitudes toward security and how deeply it had taken root differed from one product organization to another, and in some cases improvements stalled because they were given low priority.

Meanwhile, looking across KTC as a whole, several organizations covered "security," each for its own domain: in addition to the organizations covering back-office security and product environment security, there was the cloud security covered by the CCoE, so three organizations existed separately. SOC work was also performed separately in each, so reaching company-wide agreement on security measures took time, and from the product side it was unclear where to go for security consultations. Company-wide, the "Security Group" covering product environment security played the central role, and the CCoE team acted as a bridge between this Security Group and the product side while carrying out the cloud security monitoring and improvement activities.

Establishing the SCoE Group

Against this background, the SCoE Group was established to solve the following issues:

- Making cloud security improvement activities take root
- Integrating the security-related organizations across KTC

Regarding the integration, merging the three organizations into one department (the IT/IS Department) enabled more efficient and faster action. Regarding making improvement activities take root, being organized into the IT/IS Department, a department that includes security, strengthened company-wide security efforts. CCoE activities had previously been carried out by one team within the Platform Group, but becoming part of a department with "security" in its name raised the commitment to security. The change from Cloud CoE to Security CoE also strengthens the message of an organization specializing in cloud security and means reinforcing the organization's security functions. In particular, being in the same department as the Security Group should let us carry out security improvements more quickly.

I had mixed feelings about the CCoE disappearing after a year and a half, but since the CCoE had always centered its activities on governance, I decided to embrace the change. Although the organization itself is gone, CCoE activities continue as a company-wide virtual organization.

The SCoE Group's Mission

With the SCoE Group established, we defined its mission as follows: monitor guardrails and carry out improvement activities in real time. Here, "guardrails" means not only preventive and detective guardrails but also configurations and attacks that create security risks. Looking at the current cloud security landscape, many security incidents are caused by cloud posture, and the time from a posture deficiency to an actual incident is shrinking rapidly. The SCoE's mission is therefore to respond as quickly as possible when a security risk arises, and to prepare in advance so that we can.

The SCoE Group's Concrete Activities

To realize the mission, our concrete activities follow three principles:

- Do not let security risks arise
- Constantly monitor and analyze security risks
- Respond quickly when a security risk arises

For "do not let security risks arise," we continue the CCoE work of creating standardized cloud security guidelines and providing security-preset cloud environments. AWS has been the focus so far, but we are extending coverage to Google Cloud and Azure, and we hold study sessions as needed to spread the practices internally.

For "constantly monitor and analyze security risks," we previously focused on CSPM (Cloud Security Posture Management) and the SOC, but we are expanding into CWPP (Cloud Workload Protection Platform) and CIEM (Cloud Infrastructure Entitlement Management). For the SOC, we have also started consolidating what used to be done separately by the three organizations into one.

For "respond quickly when a security risk arises," we have begun examining configuration automation, scripting, and the use of generative AI. Going forward, we believe it will be difficult to maintain a secure cloud environment without leveraging generative AI, so we are studying how to use it.

Summary

KINTO Technologies has reorganized the CCoE team into the SCoE Group, established so that the cloud "governance" activities the CCoE carried out can be performed by an organization specializing in cloud security. The SCoE Group will play an important role in leading the evolution of cloud security. In a field that grows more complex as the cloud evolves, we aim to minimize security risks and underpin safe, reliable services. Thank you for reading to the end.

Finally

The SCoE Group is looking for people to work with us. Whether you have hands-on cloud security experience or are simply interested, you are very welcome. Feel free to get in touch. For details, please check here.
I Tried Using Svelte in Astro

Hello (or good evening), this is part 4 of our irregular Svelte series. You can find our previous articles in the series here:

- Insights from using SvelteKit + Svelte for a year *SvelteKit major release supported
- Comparison of Svelte and other JS frameworks - Irregular Svelte series-01
- Svelte unit test - Irregular Svelte series 02
- Using Storybook with Svelte - Irregular Svelte series 03

This time, although this is a Svelte series, I am going to change my tune a bit and try using Svelte in Astro. Have you ever heard of Astro, a framework that is currently gaining popularity? Astro is a framework for building websites that, by default, do not rely on client-side JavaScript. Rather than being loaded by default, JavaScript is explicitly specified and loaded per component. In Astro terms, this concept is commonly referred to as "Islands." Also, as officially stated, Astro supports a variety of popular frameworks! https://astro.build/

Its key features are:

- Zero JavaScript by default
- Multi-Page Application (MPA)
- Various UI frameworks can be integrated into Astro

In this article, I will show you how to use Svelte in Astro and try out various things, such as props and bindings:

- Setting up the environment
- Import a Svelte component into Astro
- Try props with Astro and Svelte
- Astro and Svelte bindings

Setting up the Environment

Install Astro and Svelte:

```bash
yarn create astro astro-svelte
```

Install Astro in the astro-svelte directory using Astro's CLI. Now we are ready to run Astro, but we can't use Svelte with this alone. Next, install Svelte and the Svelte integration for Astro so that Svelte can run on Astro:

```bash
yarn add @astrojs/svelte svelte
```

Now that we have the modules for running Astro and Svelte, declare in the Astro config file, astro.config.mjs, that we will be using Svelte. After this, we are ready to run Svelte on Astro. Thanks to the CLI, the process involves very few steps and is pretty easy.

```js
import { defineConfig } from 'astro/config';
// Add here
import svelte from '@astrojs/svelte';

// https://astro.build/config
export default defineConfig({
  // Add here
  integrations: [svelte()],
});
```

Now that we are ready, let's actually run Svelte on Astro.

Import a Svelte Component into Astro

```svelte
<script>
  let text = 'Svelte'
</script>

<p>{text}</p>
```

First, we created a child Svelte component. This component inserts the string Svelte into the tag. Next, import the Svelte component into the parent Astro component.

```astro
---
import Sample from '../components/Sample.svelte'
---
<Sample />
```

You see, it is very easy. I mean, it's amazing! Given that Astro is MPA, you could leave only the routing to Astro and use Svelte for the components.

Try Props with Astro and Svelte

Export the value in the Svelte component above:

```svelte
<script>
  export let text = ''
</script>

<p>{text}</p>
```

Insert a string from Astro:

```astro
---
import Sample from '../components/Sample.svelte'
---
<Sample text="Svelte" />
```

The same string, Svelte, is now displayed. So, conversely, can props be passed with Svelte as the parent? Let's try it. Define a child Astro component...

```astro
---
export interface Props {
  astrotext: string;
}
const { astrotext } = Astro.props;
---
<p>{astrotext}</p>
```

Load it from the Svelte component!

src/components/Sample.svelte

```svelte
<script>
  import Child from './Child.astro'
  export let text = ''
</script>

<p>{text}</p>
<Child astrotext="Svelte" />
```

It failed. Apparently, the parent needs to be Astro. Then what if both parent and child are Svelte? First, create a child Svelte component.
```svelte
<script>
  export let svelteChild = ''
</script>

<p>{svelteChild}</p>
```

Use it in the parent Svelte component...!

```svelte
<script>
  import SvelteChild from "./SvelteChild.svelte";
  export let text = ''
</script>

<p>{text}</p>
<SvelteChild svelteChild="SvelteChild" />
```

It worked! It may be obvious, but Svelte to Svelte works. Also, it seems the files under pages must be *.astro files.

Failed cases:

- src/pages/+page.svelte
- src/pages/index.svelte

It became clear that to import files with different UI framework extensions, the parent needs to be an *.astro file.

Run Svelte Binding in Astro

Finally, let's try binding. Binding in Svelte:

```svelte
<script>
  export let text = ''
  let name = '';
</script>

<input bind:value={name}>
<p>{name}</p>
<p>{text}</p>
```

The expectation is that the string "name" will be bound. src/pages/index.astro is unchanged, so let's look at the screen. Even when you type into the input, nothing is reflected... In Astro, some client-side features (such as user input into input fields like this) do not work by default. If you want to use these features, binding becomes possible by applying Astro's client:load directive to the imported component.

```astro
---
import Sample from '../components/Sample.svelte'
---
<Sample text="Svelte" client:load />
```

It worked fine. The client directive is not limited to :load, so it might be interesting to try out the variations. https://docs.astro.build/en/reference/directives-reference/#client-directives

Summary

Would it really work? I started with doubts like that, but it is practical enough to use Astro together with UI frameworks in production. Astro appears to be particularly easy to use for corporate websites, and although not covered here, its sitemap functionality is also robust. This is all on how I tried using Svelte in Astro. My next article will conclude the series with practical tips for Svelte (rules of thumb).
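One addendum to the client directive note above: for example, using Astro's standard client:visible directive, you can defer hydration until the component scrolls into view, which suits below-the-fold widgets:

```astro
---
import Sample from '../components/Sample.svelte'
---
<!-- Hydrate only when the component becomes visible in the viewport -->
<Sample text="Svelte" client:visible />
```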
Hello (or good evening), welcome to the third installment in the intermittent Svelte series. Below are our previous articles in the series:

- Insights from using SvelteKit + Svelte for a year
- Comparison of Svelte and other JS frameworks - Irregular Svelte series 01
- Svelte unit test - Irregular Svelte series 02

In this installment, we will talk about using Svelte and Storybook.

About Storybook

I think it is known as a tool that simplifies the management and operation of UI components, while also offering a range of other functionality. https://storybook.js.org/

What We Will Do in This Article

In this article, I will cover the following three points:

- Implement Storybook in a real project
- Register components in Storybook
- Run tests on Storybook

Let's get started!

Implementing Storybook in a Real Project

This time, I will integrate Storybook into an ongoing project instead of starting from scratch.

The project: https://noruwaaaaaaaay.kinto-jp.com/

This project was made using SvelteKit + microCMS + [S3 + CloudFront]. They have interesting content, so I recommend visiting the website! Recommended articles (in Japanese):

https://noruwaaaaaaaay.kinto-jp.com/post/93m02vm8chf3/
https://noruwaaaaaaaay.kinto-jp.com/post/fe35u405761/

Deployment Steps

```bash
npx storybook@latest init
```

Run this command in the project directory. Doing so completes the initial setup of Storybook in your project: a directory called .storybook and a directory under src called stories will be created. That is all for the initial setup.

Register Components in Storybook

Try Running Storybook

Try launching Storybook by running `yarn storybook`. You will see a screen like this. Since the components in src/stories/ and their **.stories.ts files are not used in the project, I will delete all of the files in stories, add Button.stories.ts back in, and register the components actually used for Noru-Way in Storybook.

Try Registering Components in Storybook

Here are the visual and the code of a button that is an actual component in the project.

```svelte
<script lang="ts">
  export let button: { to: string; text: string };
</script>

<div class="button-item">
  <a href={button.to} class="link-block">
    <span class="link-block-text">{button.text}</span>
  </a>
</div>
```

Let's register the button component above in Storybook.

```ts
import type { Meta, StoryObj } from '@storybook/svelte';
// Register the button component
import Button from '$lib/components/Button.svelte';

const meta: Meta<Button> = {
  title: 'Example/Button',
  component: Button,
  tags: ['autodocs'],
};

export default meta;
type Story = StoryObj<Button>;

export const Primary: Story = {
  // Pass the button component's props object
  args: { button: { to: '', text: '' } },
};
```

The screen will be updated to look like this. Let's try actually replacing the text on the Storybook screen. I was able to confirm that it actually changed. Albeit very minimal, that is all for registering the button component.

Try Testing with Storybook

I will try testing the actual stories file for the component I added, keeping the process as simple as possible.

Deployment Steps

First, install the module required for testing.

```bash
yarn add --dev @storybook/test-runner
```

Running Tests on Storybook

Let's test it:

```bash
yarn test-storybook
```

If you run the above and the test passes, the output will look like this. If the test fails, it will look something like this instead, depending on which part fails. I was able to verify that Storybook was working properly.
There are many options available, so if you want to know more, please see: https://storybook.js.org/docs/svelte/writing-tests/test-runner

Conclusion

As you saw, I was easily able to install Storybook, add stories to components, and test that Storybook works as intended. Adding Storybook to an HTML-only project was very hard when I tried it in the past, but as this demonstration showed, it is actually pretty easy now. That made me realize we live in good times. That concludes today's article on using Svelte and Storybook. Next time, I will explore something different by integrating Svelte with Astro. Hope you look forward to the next one!
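One addendum to the testing section above: the test runner also executes a story's play function if one is defined, so simple interaction assertions can live next to the story itself. A minimal sketch, assuming the Button.stories.ts file above plus the @storybook/testing-library and @storybook/jest helper packages; the story name and args values are hypothetical:

```ts
import { within } from '@storybook/testing-library';
import { expect } from '@storybook/jest';

// Added to the same Button.stories.ts as above
export const LinkCheck: Story = {
  args: { button: { to: '/noru-way', text: 'Read more' } },
  play: async ({ canvasElement }) => {
    const canvas = within(canvasElement);
    // The button component renders an <a> element, so assert on its visible text
    await expect(canvas.getByText('Read more')).toBeInTheDocument();
  },
};
```

With this in place, `yarn test-storybook` fails whenever the assertion in the play function does.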
Introduction

Hello, I am Suzuki, and I joined the company in November! I interviewed those who joined in November 2023 about their first impressions of the company and summarized the answers in this article. I hope this content will be useful for those who are interested in KINTO Technologies, and serve as a reflection for the members who participated in the interview.

Shirai

Self-introduction
I am Shirai from the Platform Group; I joined the company in August. I work on designing and building AWS infrastructure. I thought this entry would be interesting, so I decided to participate!

How is your team structured?
We have two members at the Osaka Tech Lab and five at the Jimbocho Office in Tokyo, making a total of seven.

What was your first impression of KINTO Technologies when you joined? Were there any surprises?
The change from a fully remote environment to one where I primarily work onsite (1-2 days per week working from home) was a bit disorienting. On the other hand, I now feel it is easier to discuss things when I am at the office, which leaves me with a positive impression. I also had the impression that everyone had strong technical skills. Maybe because I had not been continuously involved with infrastructure, at first I was too busy just trying to follow their discussions.

What is the atmosphere like on site?
It is very homely! Since I am basically at the office, I can immediately consult with other team members, which is super helpful. In addition, we have introduced Gather, so you can easily go over to the person you want to consult while working from home. I think the reason we can discuss matters so easily is that we feel at home and have a good enough relationship to talk about things outside of work, too.

How did you feel about writing a blog post?
I take it as a challenge, like anything else. I think the KINTO Tech Blog has a lot of good articles, and this is a great stepping stone. Actually, I have already written an Advent Calendar article titled "Deployment Process in CloudFront Functions and Operational Kaizen," so please have a look!

Question from one November newcomer to another
Are there any clubs or communities within the company where people with similar interests can get together? If so, which do you participate in?
There are many! The Tech Blog has introduced the sports clubs, too! I participate in the running circle (RUN TO) and the e-sports club, even though they have not been featured yet.

AKD

Self-introduction
I am AKD from the Operation Process team, Corporate IT Group. I am the only one among the November newcomers based at the Osaka Tech Lab. I work as a corporate engineer, in what is commonly called the information systems team.

How is your team structured?
Our team of four is responsible for onboarding/offboarding and for visualizing and improving processes related to our PCs and various SaaS.

What was your first impression of KINTO Technologies when you joined? Were there any surprises?
I initially thought that, as an engineering company, there might be limited opportunities for communication. However, I have found there are numerous opportunities, including regular study sessions and meetings to review department meeting minutes, which was a pleasant surprise.

What is the atmosphere like on site?
I feel that everyone is building relationships of mutual respect, without hesitation, in a positive way. The Corporate IT Group comprises members stationed in Muromachi (Tokyo), Jimbocho (Tokyo), Nagoya, and Osaka, organized into five teams.
Despite that spread, we keep a constant Zoom channel open for communication, where conversations take place across locations and teams, and I believe this fosters a positive atmosphere.

How did you feel about writing a blog post?
I had noticed earlier posts from the company and assumed they were written only by selected people, so I was surprised that everyone writes! It is simple, but I like that it has the feel of a peer group.

Question from one November newcomer to another
After a month at the Osaka Tech Lab, could you share your impressions of the atmosphere?
It is a highly inclusive environment with a welcoming atmosphere for everyone, from people who briefly come here on business trips to new hires.

SSU

Self-introduction
This is SSU from the KINTO ONE Development Group. As a web director, I am responsible for the web direction of DX projects for Toyota dealerships.

How is your team structured?
I am a member of the DX Planning team within the Owned Media & Incubation Development Group. Our team's mission is to provide a wide range of mobility solutions to customers by addressing the bottlenecks within Toyota dealerships through the power of IT. There are seven members in total: two producers, three directors, and two designers.

What was your first impression of KINTO Technologies when you joined? Were there any surprises?
My first impression is that there are more young people and more freedom than the stereotypical image of the automotive industry suggests.

What is the atmosphere like on site?
It has only been a month since I joined, but I feel the DX Planning team is filled with distinct personalities, and everyone brings their own uniqueness to the table. I think our team's strength lies in these differences, because they let us notice what might otherwise be overlooked when we work on a project together, whether in one-on-one communication or in meetings.

How did you feel about writing a blog post?
I thought my first time blogging had finally arrived.

Question from one November newcomer to another
What is your favorite emoji in the KINTO Technologies Slack workspace?
I like the mushroom emoji running with a determined face.

kiki

Self-introduction
I am kiki from the Human Resources Group. I take part in the hiring process as well as the Tech Blog operations project team.

How is your team structured?
The HR team is currently made up of six people (as of December 2023), including myself. We have members with diverse personalities, and everyone takes an interest in each other's work. Together, we work diligently on recruitment tasks on a daily basis.

What was your first impression of KINTO Technologies when you joined? Were there any surprises?
The organization was flatter and more open than I expected. Perhaps it is because of my role in HR, but I have noticed that discussions seldom hinge on who said something, but rather on "what is the best course right now" for moving the team forward. In my second week, I participated in the information-sharing meeting at the Osaka Tech Lab, and my impression is that there are many warm people who welcomed me like a friend right away!

What is the atmosphere like on site?
To create a space where it is easy to talk, we often engage in small talk. It is an environment where you can stay tuned to what is happening within the organization and among its people.
I was quite reserved for the first two weeks after joining, as I was still getting acquainted. However, given my prior experience in recruiting, I appreciate an atmosphere where I can easily raise any question I have, such as "What's happening here?", at any time.

How did you feel about writing a blog post?
Simply: "happy!" The members of the Tech Blog project team have been in touch since our first month at the company. I have not been active in external communication, so I worry a bit about saying something problematic, but I enjoy writing and see it as a valuable space for exploration.

Question from one November newcomer to another
How do you relieve stress?
I listen to rock music, a genre I do not usually listen to much. Franz Ferdinand and Yoru no Honki Dance are particularly enjoyable. I also came across an article suggesting that doing silly dances at home is good for relieving stress, so I have been dancing at home where no one can see me. (Highly recommended!)

Y.Suzuki

Self-introduction
I am Suzuki from the Project Promotion Group. I am in charge of front-end engineering at KINTO FACTORY.

How is your team structured?
Although some members are service providers or work in other divisions, the team is made up of KINTO Technologies members from management down to implementation. Within it, the front-end team currently consists of six members, with another new member joining in December.

What was your first impression of KINTO Technologies when you joined? Were there any surprises?
Before joining, I expected a more formal environment because of the higher average age and the nature of a business corporation. But once I joined, there was plenty of flat communication and a lot of openness to new initiatives and to anything that seemed interesting. I found the environment full of people older and more senior than me who were playful and inquisitive while drawing on their experience, and who balanced being casual with being mature. Since my previous job was mainly work from home, I thought I would have a hard time commuting, but I have adapted easily and actually really enjoy the hybrid work style 😳

What is the atmosphere like on site?
There were many things I did not understand at first, so I felt I had to build relationships where we could talk to each other easily. That is why, less than a week after joining, I tried offering "Nerunerunerune" (a candy you mix yourself) at my desk. I have been chatting and smiling with everyone, and recently I have been eating mandarin oranges with my team members while talking about work. About two weeks after joining, when I mentioned during one-on-one meetings and meals that I could do other things besides engineering, I was told, "There aren't many people who can do that, so let me see whether we can make good use of it." I am currently looking to expand my work beyond the front end to improve the product! When the timing works out, we have lunch together on office days, and I find there are many opportunities to communicate beyond work.

How did you feel about writing a blog post?
I had the impression that blogs for engineers focus on technology, meaning much of the content already exists elsewhere, requiring extensive verification and making it hard to write, even just deciding on a subject.
But this time it was a simple entry, so I thought I could provide some useful information to those interested in KINTO Technologies.

Question from one November newcomer to another
What is the most enjoyable moment at work?
I find joy in getting inquiries, even about the simplest things. Although I have only been with the company for a short time, I am glad to know there are areas where colleagues can rely on me and tasks I can contribute to. I am trying to absorb more from the people around me whom I respect, in order to broaden the range of things I can do.

T.F

Self-introduction
I am T.F from the Project Promotion Group. I am in charge of the back end of KINTO ONE for used cars.

How is your team structured?
The front end, back end, and BFF (backend for frontend) are each handled by a mix of employees and subcontractors.

What was your first impression of KINTO Technologies when you joined? Were there any surprises?
I was surprised that I could take paid holidays as soon as I joined. I am thinking of moving to a new place soon, so that will be helpful.

What is the atmosphere like on site?
There are many friendly people here. The atmosphere is conducive to asking questions and making suggestions.

How did you feel about writing a blog post?
It is a strange feeling to go from being a reader before joining the company to being a writer.

Question from one November newcomer to another
What were your duties during your first month at the company?
I have just joined, so I am only doing simple tasks so far: small development items, code reviews, and estimates for projects scheduled to begin in earnest next year. Behind the scenes, I am working to incorporate domain-driven design and clean architecture, among other things.

A.N

Self-introduction
I am A.N from the Common Service Development Group. I am a Product Manager for the membership platform underlying KINTO ID.

How is your team structured?
There are six members (including subcontractors).

What was your first impression of KINTO Technologies when you joined? Were there any surprises?
I caught a cold on my very first day and had to take my third day off, but I was able to receive sick leave from my first month, which was a great help.

What is the atmosphere like on site?
It depends partly on each manager's policy, but the atmosphere here respects each member's freedom. Everyone is an expert, so they act autonomously.

How did you feel about writing a blog post?
I am a little afraid it will have some impact on the company's public relations.

Question from one November newcomer to another
KINTO Technologies has Slack channels for hobbies and activities outside of work. Is there anything you are interested in?
Just today, I learned about a channel where, every morning when people come to work, they simply comment "Good morning!" It is a mystery why such a channel was created, but it is soothing, because everyone participating seems to be having fun.

F.T

Self-introduction
I am F.T from the Mobile App Development Group. I am in charge of the Unlimited app for Android.

How is your team structured?
The Android team consists of five members, including myself.

What was your first impression of KINTO Technologies when you joined? Were there any surprises?
I was surprised that there was a thorough orientation, despite my being a mid-career hire.
I also found it remarkable how few boundaries there are (both physical and psychological) between teams in the office, for example, study sessions with Android developers and communication regardless of OS or assignment.

What is the atmosphere like on site?
There is a lot of time spent working in silence. However, the atmosphere is friendly, and you can ask questions immediately when you need help.

How did you feel about writing a blog post?
I was full of anxiety.

Question from one November newcomer to another
After a month with the company, what do you think is the best thing about joining?
As an engineer, I am honestly happy to be working in an environment filled with highly skilled professionals. Many are multi-talented, and I learn a lot from them even about things unrelated to work.

W.Song

Self-introduction
My name is W.Song from the Data Engineering team in the Data Analytics Group. I am mainly responsible for data linkage.

How is your team structured?
We are four in total, including the team leader.

What was your first impression of KINTO Technologies when you joined? Were there any surprises?
It is great that there are bookshelves in the office. There are many popular books, and I feel everyone is highly motivated to learn. Actually, this next one may be my own assumption rather than a surprise or a gap: I had seen pictures of the office before joining, especially the one of the junction area, which looked very stylish, so I assumed it was a free-address office.

What is the atmosphere like on site?
I feel I can take my time when speaking. Although everyone was busy, they gave me detailed explanations, which I really appreciate. This is the first environment in a long time where I can communicate a lot.

How did you feel about writing a blog post?
I think it is a really great way to produce output. I feel I can not only promote myself but also connect with people who share similar concerns and ideas, and potentially make friends.

Question from one November newcomer to another
What has changed since joining KINTO Technologies?
My interest in cars is deepening. I come to the office three times a week, so I should be thinner than before. And my impression of this emoji 😇 has changed a lot: I used to use it to mean "Happy, I did it, it went well," and I was surprised to find out that it actually means "I'm screwed, I'm finished."

Conclusion

Thank you very much for sharing your thoughts in the midst of your busy schedules right after joining! The number of new members at KINTO Technologies is increasing day by day. I hope you look forward to more articles about new members joining and being assigned to various divisions. Furthermore, KINTO Technologies is seeking individuals who can collaborate across various divisions and occupations! For more information, please click here.
Introduction
Hello! This is Hasegawa ( @gotlinan ), an Android engineer at KINTO Technologies! I usually work on the development of an app called myroute. Check out the other articles written by myroute members!
Jetpack Compose of myroute Android App
A Compose Beginner's Impressive Experience With Preview
In this article, I will explain Structured Concurrency using Kotlin coroutines. If you already know about Structured Concurrency but not how to use it with coroutines, please refer to Convenience Functions for Concurrency.

Structured Concurrency?
So what is Structured Concurrency? In Japanese, I would render it as something like "structured parallel processing." Imagine having two or more processes running in parallel, with cancellations and errors correctly managed for each. Through this article, let's learn more about Structured Concurrency! I'll introduce two common examples here.

1. Wanting to Coordinate Errors
The first example is to execute Task 1 and Task 2, and then execute Task 3 based on their results. As an illustration: after executing Task 1 and Task 2, execute Task 3 according to the results. In this case, if an error occurs in Task 1, it is pointless to continue with Task 2. Therefore, if an error occurs in Task 1, Task 2 must be canceled. Similarly, if an error occurs in Task 2, Task 1 should be canceled, eliminating the need to proceed to Task 3.

2. Not Wanting to Coordinate Errors
The second common example is when there are multiple areas on the screen, each displayed independently. As a diagram: multiple areas on the screen, each displayed independently. In this case, even if an error occurs in Task 1, you may still want to display the results of Task 2 or Task 3. Therefore, even if an error occurs in Task 1, Tasks 2 and 3 must continue without being canceled.

I hope these examples were clear to you. With coroutines, the above examples can be easily implemented based on the idea of Structured Concurrency! However, a deeper understanding requires grasping the basics of coroutines, so from the next section we will actually learn about them! If you already know the basics, skip to Convenience Functions for Concurrency.

Coroutines Basics
Let's cover the basics of coroutines before going into detail. With coroutines, asynchronous processing can be initiated by calling the launch function on a CoroutineScope. Specifically, it looks like this:

CoroutineScope.launch {
    // Code to be executed
}

So, why do we need a CoroutineScope? Because in asynchronous processing, "which thread to execute on" and "how to behave in case of cancellation or error" are very important. A CoroutineScope has a CoroutineContext, and a coroutine launched from a given CoroutineScope is controlled based on that CoroutineContext. Specifically, a CoroutineContext consists of the following elements:

Dispatcher : which thread to run on
Job : execution of cancellation, and propagation of cancellations and errors
CoroutineExceptionHandler : error handling

When creating a CoroutineScope, each element can be passed with the + operator. And a CoroutineContext is inherited between parent and child coroutines.
For example, suppose you have the following code:

val handler = CoroutineExceptionHandler { _, _ -> }
val scope = CoroutineScope(Dispatchers.Default + Job() + handler)
scope.launch { // Parent
    launch { // Child 1
        launch {} // Child 1-1
        launch {} // Child 1-2
    }
    launch {} // Child 2
}

In this case, the CoroutineContext is inherited as follows.
Inheritance of CoroutineContext
Looking at the image, it appears that a Job has been newly created instead of inherited, doesn't it? This is not a mistake. Although I stated that "a CoroutineContext is inherited between parent and child coroutines," strictly speaking it is more correct to say that "a CoroutineContext is inherited between parent and child coroutines, except for the Job." So what about the Job? Let's learn more about it in the next section!

What is a Job?
What is a Job in Kotlin coroutines? In short, it is something that "controls the execution of a coroutine." A Job has a cancel method, which allows developers to cancel started coroutines at any time.

val job = scope.launch {
    println("start")
    delay(10000) // Long process
    println("end")
}
job.cancel()
// start (printed out)
// end (not printed out)

The Job associated with viewModelScope and lifecycleScope, which Android engineers often use, is canceled at the end of the respective lifecycle. This allows ongoing processes to be canceled correctly without developers having to be mindful of screen transitions. The Job is important enough that it also plays the role of propagating cancellations and errors between parent and child coroutines. In the previous section I said that the Job is not inherited, but as in that example, Jobs can form a hierarchical relationship, as shown in the image below.
Hierarchical Relationship of Job
A partial definition of Job looks like this:

public interface Job : CoroutineContext.Element {
    public val parent: Job?
    public val children: Sequence<Job>
}

Parent-child relationships are maintained here, so parent and child Jobs can be managed when cancellations or errors occur. In the next sections, let's see how coroutines propagate cancellations and errors through the hierarchical relationships of Jobs!

Propagation of Cancellations
If a coroutine is canceled, the behavior is as follows:

Cancels all of its child coroutines
Does not affect its own parent coroutine

*It is also possible to run a coroutine unaffected by the cancellation of its parent by changing the CoroutineContext to NonCancellable. I will not cover this here, since it deviates from the theme of Structured Concurrency.

Cancellation propagates downward in the Job hierarchy. In the example below, if Job2 is canceled, the coroutines running on Job2, Job3, and Job4 will be canceled.
Propagation of cancellations

Propagation of Errors
In fact, Jobs can be broadly divided into Job and SupervisorJob, and the behavior when an error occurs differs between them. I have summarized the behavior in the two tables below: one for when an error occurs in a coroutine's own Job, and the other for when an error propagates from a child Job.
When an error occurs in a coroutine's own Job:

| | Child Jobs | Its own Job | Parent Job |
| --- | --- | --- | --- |
| Job | Cancels all | Completes with error | Propagates the error |
| SupervisorJob | Cancels all | Completes with error | Does not propagate the error |

When an error propagates from a child Job:

| | Other child Jobs | Its own Job | Parent Job |
| --- | --- | --- | --- |
| Job | Cancels all | Completes with error | Propagates the error |
| SupervisorJob | No action | No action | Does not propagate the error |

The images below illustrate the behavior in the two tables for Job and SupervisorJob respectively.

For Job
If an error occurs in Job2 under a normal Job:
The child Jobs, Job3 and Job4, are canceled
Its own Job, Job2, completes with an error
The error propagates to the parent Job, Job1
Job1's other child Job, Job5, is canceled
Job1 completes with an error

For SupervisorJob
If an error occurs in Job2 (a normal Job) whose parent, Job1, is a SupervisorJob:
The child Jobs, Job3 and Job4, are canceled
Its own Job, Job2, completes with an error
The error propagates to the parent SupervisorJob, Job1

Note that the SupervisorJob (Job1) that receives the propagated error does not cancel its other child Job (Job5) and itself completes normally.

Moreover, you can use invokeOnCompletion to check whether a Job completed normally, with an error, or by cancellation.

val job = scope.launch {} // Some work
job.invokeOnCompletion { cause ->
    when (cause) {
        is CancellationException -> {} // cancellation
        is Throwable -> {} // other exceptions
        null -> {} // normal completion
    }
}

Exceptions Not Caught
By the way, what about exceptions that are not caught by a coroutine? For example, what happens if an error occurs in, or propagates to, a top-level Job? What happens if an error occurs in, or propagates to, a SupervisorJob? The answers are:

CoroutineExceptionHandler is called if one is specified.
If no CoroutineExceptionHandler is specified, the thread's default UncaughtExceptionHandler is called.

As mentioned earlier in Coroutines Basics, CoroutineExceptionHandler is also an element of CoroutineContext. It can be passed as follows:

val handler = CoroutineExceptionHandler { coroutineContext, throwable ->
    // Handle exception
}
val scope = CoroutineScope(Dispatchers.Default + handler)

If no CoroutineExceptionHandler is specified, the thread's default UncaughtExceptionHandler is called. If you wish to specify one, write the following:

Thread.setDefaultUncaughtExceptionHandler { thread, exception ->
    // Handle uncaught exception
}

Until writing this article, I had misunderstood that using a SupervisorJob would keep the application from terminating, because the error would not propagate. However, a SupervisorJob only stops error propagation within the coroutine Job hierarchy. Therefore, if neither of the two handlers above is defined appropriately, things may not work as intended. For example, in an Android app, the thread's default UncaughtExceptionHandler terminates (crashes) the app unless the developer specifies otherwise. Running plain Kotlin code, on the other hand, will just print an error log.

Also, slightly off topic, you may be wondering whether try-catch or CoroutineExceptionHandler should be used. By the time an error is caught by a CoroutineExceptionHandler, the coroutine's Job has already completed and cannot be resumed. Basically, you can use try-catch for recoverable errors. When implementing based on the idea of Structured Concurrency, or when you want to log errors, setting up a CoroutineExceptionHandler seems like a good approach.
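To make the tables above concrete, here is a minimal runnable sketch of my own (not from the original article), assuming only that kotlinx-coroutines is on the classpath. It contrasts SupervisorJob with Job as the scope's Job and shows the CoroutineExceptionHandler receiving the uncaught error:

import kotlinx.coroutines.*

fun main() = runBlocking {
    // Receives errors that are not consumed anywhere in the Job hierarchy
    val handler = CoroutineExceptionHandler { _, e -> println("handler caught: $e") }

    // With SupervisorJob, a failing child does not affect its siblings
    val scope = CoroutineScope(Dispatchers.Default + SupervisorJob() + handler)

    val failing = scope.launch {
        delay(100)
        error("boom") // this child completes with an error; the handler is called
    }
    val sibling = scope.launch {
        delay(300)
        println("sibling completed") // still printed: no propagation to siblings
    }
    joinAll(failing, sibling)
}

Swapping SupervisorJob() for Job() reproduces the first table instead: the error propagates to the scope's Job, the sibling is canceled before it can print, and the handler is still called because the scope's Job is the root of the hierarchy.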
Convenience Functions for Concurrency
The explanation got a little long, but with coroutines, functions such as coroutineScope() and supervisorScope() are used to achieve Structured Concurrency.

coroutineScope()
Remember the first example, Wanting to Coordinate Errors? You can use coroutineScope() for it. coroutineScope() waits until all started child coroutines have completed, and if an error occurs in one child coroutine, the other child coroutines are canceled. The code would be as follows:

Child coroutine 1 and Child coroutine 2 are executed concurrently
Child coroutine 3 is executed after Child coroutines 1 and 2 have finished
Whichever child coroutine encounters an error, the others are canceled

scope.launch {
    coroutineScope {
        launch { // Child 1
        }
        launch { // Child 2
        }
    }
    // Child 3
}

(For a runnable variant of this pattern using async, see the sketch after the summary below.)

supervisorScope()
Remember the second example, Not Wanting to Coordinate Errors? You can use supervisorScope() for it. supervisorScope() also waits until all started child coroutines have completed, but if an error occurs in one child coroutine, the other child coroutines are not canceled. The code would be as follows:

Child coroutine 1, Child coroutine 2, and Child coroutine 3 are executed concurrently
An error in any child coroutine does not affect the other child coroutines

scope.launch {
    supervisorScope {
        launch { // Child 1
        }
        launch { // Child 2
        }
        launch { // Child 3
        }
    }
}

Summary
How was it? I hope you now have a better understanding of Structured Concurrency. There were quite a few basics to cover, but understanding them will help you navigate more complex implementations. And once you can write structured concurrency well, improving the local performance of a service becomes relatively easy. Why not consider Structured Concurrency if you have bottlenecks needlessly running in series? That's it for now!
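As referenced in the coroutineScope() section above, here is a minimal runnable sketch of my own (not from the original article), again assuming only kotlinx-coroutines. It uses async so that Task 3 can consume the results of Tasks 1 and 2, and a failure in either one cancels the other:

import kotlinx.coroutines.*

suspend fun fetchA(): Int { delay(100); return 1 } // stand-in for Task 1
suspend fun fetchB(): Int { delay(200); return 2 } // stand-in for Task 2

suspend fun combined(): Int = coroutineScope {
    val a = async { fetchA() } // Task 1 and Task 2 run concurrently
    val b = async { fetchB() }
    // Task 3: runs only after both complete. If fetchA or fetchB throws,
    // the other async is canceled and coroutineScope rethrows to the caller.
    a.await() + b.await()
}

fun main() = runBlocking {
    println(combined()) // prints 3
}

Unlike launch, async lets Task 3 consume the returned values, which matches the first example's "execute Task 3 based on the results."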
Introduction
Hey there! I'm Viacheslav Vorona, an iOS engineer. This year, my colleagues and I had an opportunity to visit try! Swift Tokyo, an event that got me thinking about some tendencies within the Swift community. Some of them are fairly new, while others have been around for a while but have recently evolved. Today, I would like to share my observations with you.

The elephant in the room...
Let's get it out of the way: the much-anticipated Apple Vision Pro was released roughly two months before try! Swift, so it only makes sense that the conference room was full of Apple fans excited about it. People who hadn't tried the gear out yet were looking for any opportunity to put it on their heads for a couple of minutes, pinching the air with their fingers. All seats in the room were occupied during the talk about the implementation of a visionOS app by Satoshi Hattori. The application itself was as simple as it could get: just a circular timer floating in the virtual space in front of the user, but once Hattori-san actually connected the headset and started to show the results of his work in real time, the audience went wild. I could also mention that spatial computing enthusiasts organized their own small, unofficial meeting on the second day of the conference. Unlike some other devices from Apple, the Vision Pro is forming its own quite noticeable sub-community within the Swift community. All the geeks who grew up watching futuristic virtual devices in movies are now feeling like they are getting closer to their cyberpunk dreams. It's exciting, or scary, depending on your perspective. The choice is yours. Oh, and of course, we can't move to the next topic without an honorable mention of the "Swift Punk" performance at the conference opening, which was also inspired by Vision Pro.
$10,000+ worth of ~~swag~~ scenic props

New Swift frontiers
This trend is not quite new, but recently it has been getting some exciting development in multiple directions at once. I am talking about the Swift community striving to escape its Apple-devices homeland and expand beyond it. Some things like server-side Swift have been around for a while: Vapor, for example, has been out since 2016, and even though it wasn't widely adopted, it keeps going. Tim Condon from the Vapor Core Team gave a great presentation on large codebase migration at try! Swift. The topic was largely inspired by the migration Vapor is currently undergoing to fully support Swift Concurrency by version 5.0. According to Tim, that version is likely to be released in summer 2024, so if you are interested in trying out server-side Swift, that might be a great time to start.
Tim Condon, the man behind Vapor. Nice shirt, by the way.
To accompany your Swift-written API, you might also try to implement a webpage using that same Swift language. Conveniently, that was the topic of the talk given by Paul Hudson. His lecture on leveraging Swift result builders for HTML generation was clear and exciting, just as one would expect from an educator as experienced as Paul. The climax of his speech was the announcement of Ignite, a new site builder by Paul using the exact same principle he had been talking about.
Paul Hudson, the man behind... a lot of things. Including Ignite from now on.
Another memorable presentation in this category was given by Saleem Abdulrasool, a big cross-platform Swift enthusiast, who talked about differences and similarities between Windows and macOS and the challenges Swift developers would face should they try to make a Windows application. Last, but not least, there was a curious presentation by Yuta Saito on tactics for reducing the size of Swift binaries. This topic might seem unrelated to the trend I'm describing here, but that changed when Saito-san showed the audience a simple Swift app deployed to Playdate, a tiny handheld console. Truly impressive. It is pleasing to see that Swift is not only gaining new capabilities on Apple platforms but also relentlessly exploring new frontiers.

Friend Computer
Lastly, I would like to talk about AIs, LLMs, and so on: a topic that has been all over the place for the last couple of years and keeps re-emerging every time a new "more-powerful-than-everything-else" model is released. In a digital gold rush, software companies nowadays are trying to apply AI processing to anything possible. Of course, the Swift community could not stay unaffected, and at try! Swift this phenomenon was reflected in multiple ways. One of the first presentations at the conference, by Xingyu Wang, an engineer from Duolingo, was dedicated to the Roleplay feature introduced by her company in collaboration with OpenAI. She discussed the use of an AI-powered backend, optimization challenges such as AI-generated responses taking significantly longer to produce, and the techniques and tricks Xingyu's team applied to mitigate them. Overall, the presentation was optimistic, painting a bright image of the endless opportunities provided by AI. On the other side of the spectrum, there was a talk by Emad Ghorbaninia titled "What Can We Do Without AI in the Future?", which caught my attention before the conference; I was quite curious about what it would entail. The talk turned out to be a thoughtful reflection on the challenges we, as developers and humans, are about to face with the further development of AI. To put it simply, Emad's general point is that we should focus on the most human aspects of our creative process so as not to lose the race against the incoming generation of silicon-brained developers. Hard to disagree.

Conclusion
Reflecting on the diverse discussions at try! Swift Tokyo, it's fascinating to see how the Swift community continuously evolves and adapts to new technological landscapes. From embracing groundbreaking hardware like the Apple Vision Pro to exploring new realms with server-side Swift and AI integrations, these developments highlight a community in flux, responsive to the broader tech environment. This curiosity and willingness to innovate ensure that Swift is not just a language confined to iOS development but a broader toolset that pushes the boundaries of what's possible in software. As we look forward, the dynamic interplay between technology and developer creativity within the Swift community promises to bring even more exciting advancements. It's a thrilling time to be part of this vibrant ecosystem.
Introduction
Konnichiwa! I'm Felix, and I develop iOS applications at KINTO Technologies. This was my first experience at a Swift-focused conference. From March 22nd to 24th, 2024, I attended the try! Swift Tokyo 2024 event held in Shibuya. It offered an excellent chance to delve into the latest industry trends and network with other engineers.

Presentations
Among the many compelling presentations, I would like to highlight two that stood out to me. First, I would like to talk about Duolingo's AI Tutor feature. The speaker, Xingyu Wang, shared insights into the implementation of their AI Tutor feature. She also presented challenges, such as building a chat interface and optimizing latency for helpful phrases, along with solutions leveraging GPT-4's capabilities. It was nice that she covered the entire architecture, addressing not just the frontend but also the challenges they are currently encountering. Personally, I once had a similar goal of developing an English-learning app for Japanese users, and this knowledge would be invaluable in creating similar services. Their incorporation of a sophisticated roleplay feature enhances learners' ability to practice conversational skills in a lifelike setting.

Next up, Pointfree, renowned in the community for their frameworks, particularly impressed me with their presentation on testing Swift macros, introduced in Swift 5.9. Macros, which are compiler plugins, enhance Swift code by generating new code, diagnostics, and fix-its. The duo of presenters walked through the complexities of writing and testing these macros, highlighting the nuances of Swift. They also demonstrated how their testing library, swift-macro-testing, builds on Apple's tools by making the macro testing process more streamlined, efficient, and effective. This presentation showcased their deep understanding of Swift and their innovative approach to improving development workflows.

Booths
The booth area was bustling, as attendees were eager to interact with companies and pick up some giveaways. Cyber Agent's booth was particularly engaging, featuring a whiteboard where attendees could write code recaps on post-its. This interactive activity was both educational and effective in boosting enthusiasm. At this conference, there was also an innovative take on the usual post-presentation Q&A sessions. Instead of a formal setting, those with questions could meet speakers directly at designated booth areas. I think this setup allowed for more personal interactions, where attendees could engage in conversations, ask their questions, and socialize, facilitating better communication and networking opportunities.

Workshop
On the final day of the conference, attendees could choose their preferred workshops. I opted for the workshop on TCA and sat towards the back of a large room accommodating about 200 participants. The workshop mainly offered a walkthrough of using the Composable Architecture to develop a sample "SyncUp" app. Initially, I tried to code along but eventually just observed. An interesting aspect was the framework's structured approach to managing side effects, making the parts of the app that interact with the external world both testable and understandable. The unit testing process seemed particularly streamlined and clear.

Conclusion
Attending the try! Swift 2024 Tokyo conference was a highly enriching experience. It provided a unique platform to immerse myself in cutting-edge Swift technologies and connect with industry leaders and peers.
The presentations were insightful, providing in-depth explorations of real-world challenges and creative solutions in iOS development. The interactive booth sessions and specialized workshops added great value, enhancing learning and networking opportunities. To anyone reading this, I definitely recommend attending next year's try! Swift!
Introduction
Hello. This is Nakaguchi from the Mobile App Development Group at KINTO Technologies. I develop the KINTO easy sign-up app (KINTOかんたん申し込みアプリ) and also plan study sessions and events for the iOS team. Eight members of the iOS team took part in try! Swift Tokyo 2024, held from March 22nd to 24th, 2024. Afterwards, as part of our study sessions, we held a recap lightning talk (LT) event, and this post summarizes how it went. Five of the eight members gave LT presentations, and the remaining three are publishing articles on the KTC Tech Blog. The Tech Blog articles are here:
Recap of Try! Swift Tokyo 2024
Trying! Swift Community in 2024
The last member's article will be published at a later date!!

The LT event
This study session is usually held with just the iOS team members, but this time we also had guests: members of 学びの道の駅, the company-wide study-session support team (details here), and members of the Android team. With more than 20 participants in total, it was a very lively event. Here is the online venue! Everyone has a great smile 😀 And here is the offline venue! Whether because of the rain or the pollen, many members were working from home that day, so the offline side was a little lonely, but everyone here has a great smile too 😀 At iOS team study sessions we also set up a dedicated Slack thread where everyone livens things up with comments, and this time it was a huge success, with more than 150 comments posted in a single hour!

Presenter 1: 杜-san
He shared his impressions of a wide range of the sessions he attended! What left an impression on me was that he also made a point of thanking the event staff and the simultaneous interpreters. 杜-san strikes me as someone who uses SwiftUI and TCA at work with a deep understanding of their internals, and since this try! Swift had many sessions digging into the fundamentals, I think he was able to deepen his knowledge further. Here is 杜-san presenting!

Presenter 2: Hinomori-san (ヒロヤ@お腹すいた)
He was involved in all three days as a staff member and shared lots of behind-the-scenes stories! He has also published an article here, so please take a look!! Apparently quite a few of the setups were actually Hinomori-san's work!! I saw him working as staff many times over the three days, and he looked very busy... That said, the scene at the day-two closing, when all the organizers, speakers, and staff gathered on stage, was very moving, and Hinomori-san was shining among them. Here is Hinomori-san presenting!

Presenter 3: Nakaguchi
This was my LT. I want to catch up on visionOS this year, so I picked these sessions for my talk:
SwiftでvisionOSのアプリをつくろう (Let's build a visionOS app in Swift, Day 1)
Apple Vision Proならでは! 空間アプリ開発の始め方 (How to start developing spatial apps unique to Apple Vision Pro, Day 3)
I have never developed for visionOS either at work or in my personal time (and of course I don't own the actual device), but my appetite for visionOS has grown enormously! Here I am presenting!

Presenter 4: Ryomm-san
Ryomm-san had already published a participation report on Zenn (published on 03/23 👀 ...so fast!!) and presented based on it.
try! Swift Tokyo 2024に参加しました! https://zenn.dev/ryomm/articles/e1683c1769e259
The talk looked back over the sessions as a whole, as well as the sponsor booths and the after-party. With remarkable communication skills, Ryomm-san apparently exchanged information with many people, including speakers! The trick to striking up a conversation with the person next to you is, reportedly, courage and a casual "chiwa-!" (hi there!). At events like these, everyone surely wants to talk to someone, so you should just go ahead and start conversations! Something to learn from 😭 Here is Ryomm-san presenting!

Presenter 5: goseo-san
goseo-san presented impressions of "How to cultivate a sense for designing good applications" (Day 1)! He tried out the source code introduced in the session and shared his thoughts on implementing it in SwiftUI. I learned from him that SwiftUI still has its quirks when it comes to animation! Here is goseo-san presenting (this one took place on a separate day)!

Closing
This was the first try! Swift in five years, and my first time participating. It was also my first offline conference, as I had only ever attended conferences online. It was very educational and a valuable experience. Next time, I would like to get more deeply involved, for example as a sponsor or staff member. I also think that connecting our try! Swift participation to output (the LT event and blog articles) was a very good initiative for the team as a whole. I hope we can keep up similar activities at future large conferences such as try! Swift and iOSDC!
I'm Ryomm, and I develop my route (iOS) at KINTO Technologies. This article explains how to create Snapshot Test reference images in a directory of your choice.

Conclusion first
You can specify the directory by using the verifySnapshot(of:as:named:record:snapshotDirectory:timeout:file:testName:line:) method.

Background
I recently wrote an article about introducing Snapshot Tests, but after running them for a while, the number of test files grew so large that finding the test file I was after became a real chore.
![A huge number of Snapshot Test files](/assets/blog/authors/ryomm/2024-04-26/01-yabatanien.png =150x)
A huge number of Snapshot Test files
So I decided to split the Snapshot Test files into appropriate subdirectories. However, the assertSnapshots(of:as:record:timeout:file:testName:line:) method of pointfreeco/swift-snapshot-testing, the Snapshot Test library we use, does not let you specify where the reference images are created.
The existing directory structure for our Snapshot Tests looks like this:

App/
└── AppTests/
    └── Snapshot/
        ├── TestVC1.swift
        ├── TestVC2.swift
        └── __Snapshots__/
            ├── TestVC1/
            │   └── Refarence.png
            └── TestVC2/
                └── Refarence.png

When the test files are moved into subdirectories, the method above creates a __Snapshots__ directory inside each subdirectory, and inside that a directory named after the test file containing the reference image:

App/
└── AppTests/
    └── Snapshot/
        ├── TestVC1/
        │   ├── TestVC1.swift
        │   └── __Snapshots__/
        │       └── Refarence.png ← created here 😕
        └── TestVC2/
            ├── TestVC2.swift
            └── __Snapshots__/
                └── Refarence.png ← created here 😕

Our existing CI mechanism syncs everything under the App/AppTests/Snapshot/__Snapshots__/ directory to S3 as-is, so we do not want to change where the reference images live. The target directory structure is this:

App/
└── AppTests/
    └── Snapshot/
        ├── TestVC1/
        │   └── TestVC1.swift
        ├── TestVC2/
        │   └── TestVC2.swift
        └── __Snapshots__/ ← we want the reference images here 😣
            ├── TestVC1/
            │   └── Refarence.png
            └── TestVC2/
                └── Refarence.png

Running Snapshot Tests with a specific reference image directory
With the verifySnapshot(of:as:named:record:snapshotDirectory:timeout:file:testName:line:) method, you can specify the directory. The three methods provided for Snapshot Tests relate to each other as follows:

public func assertSnapshots<Value, Format>(
    matching value: @autoclosure () throws -> Value,
    as strategies: [String: Snapshotting<Value, Format>],
    record recording: Bool = false,
    timeout: TimeInterval = 5,
    file: StaticString = #file,
    testName: String = #function,
    line: UInt = #line
) { ... }

↓ runs a forEach over the comparison strategies passed in as strategies

public func assertSnapshot<Value, Format>(
    matching value: @autoclosure () throws -> Value,
    as snapshotting: Snapshotting<Value, Format>,
    named name: String? = nil,
    record recording: Bool = false,
    timeout: TimeInterval = 5,
    file: StaticString = #file,
    testName: String = #function,
    line: UInt = #line
) { ... }

↓ calls the following and tests based on the returned value

verifySnapshot(of:as:named:record:snapshotDirectory:timeout:file:testName:line:)

You can check the actual code here. In other words, since they do the same thing internally, there is no problem with calling verifySnapshot(of:as:named:record:snapshotDirectory:timeout:file:testName:line:) directly! Ta-dah!
extension XCTestCase {
    var precision: Float { 0.985 }

    func testSnapshot(vc: UIViewController, record: Bool = false, file: StaticString, function: String, line: UInt) {
        assert(UIDevice.current.name == "iPhone 15", "Please run the test on iPhone 15")
        SnapshotConfig.allCases.forEach {
            let failure = verifySnapshot(
                matching: vc,
                as: .image(on: $0.viewImageConfig, precision: precision),
                record: record,
                snapshotDirectory: "any path you like",
                file: file,
                testName: function + $0.rawValue,
                line: line)
            guard let message = failure else { return }
            XCTFail(message, file: file, line: line)
        }
    }
}

In my route we were only ever passing a single value as strategies, so I left out the loop over strategies.

Now, although we can specify a directory, we also want to follow the existing Snapshot Test convention: create a directory named after the test file and generate the reference image inside it. The path passed to verifySnapshot(of:as:named:record:snapshotDirectory:timeout:file:testName:line:) must be an absolute path, and since everyone on the team has a different environment, we need logic that generates the path to match each environment. The code came out very straightforward (and rather endearing), but here is my implementation:

extension XCTestCase {
    var precision: Float { 0.985 }

    private func getDirectoryPath(from file: StaticString) -> String {
        let fileUrl = URL(fileURLWithPath: "\(file)", isDirectory: false)
        let fileName = fileUrl.deletingPathExtension().lastPathComponent
        var separatedPath = fileUrl.pathComponents.dropFirst() // becomes an ArraySlice<String> here
        // Drop the path components after the Snapshot folder
        let targetIndex = separatedPath.firstIndex(where: { $0 == "Snapshot"})!
        separatedPath.removeSubrange(targetIndex+1...separatedPath.count)
        let snapshotPath = separatedPath.joined(separator: "/")
        // We pass a String to verifySnapshot, so build the result as a String instead of converting back to URL
        return "/\(snapshotPath)/__Snapshots__/\(fileName)"
    }

    func testSnapshot(vc: UIViewController, record: Bool = false, file: StaticString, function: String, line: UInt) {
        assert(UIDevice.current.name == "iPhone 15", "Please run the test on iPhone 15")
        SnapshotConfig.allCases.forEach {
            let failure = verifySnapshot(
                matching: vc,
                as: .image(on: $0.viewImageConfig, precision: precision),
                record: record,
                snapshotDirectory: getDirectoryPath(from: file),
                file: file,
                testName: function + $0.rawValue,
                line: line)
            guard let message = failure else { return }
            XCTFail(message, file: file, line: line)
        }
    }
}

With this, we can now split the Snapshot Tests into subdirectories while keeping the reference images in their original location. This removed the inconvenience of wanting to fix a snapshot test but struggling to find its file. There is still plenty of room for improvement, so I want to keep making our development life even more comfortable ♪
A Problem Encountered in Tests Using Spring Batch and DBUnit

Self-introduction
Hello. I am Takehana from the Payment Platform team in the Common Service Development Group, Platform Development Division[^1][^2][^3][^4][^5][^6]. This time I would like to write about a problem we ran into in tests using Spring Batch + DBUnit.

Environment

| Library etc. | Version |
| --- | --- |
| Java | 17 |
| MySQL | 8.0.23 |
| Spring Boot | 3.1.5 |
| Spring Boot Batch | 3.1.5 |
| JUnit | 5.10.0 |
| Spring Test DBUnit | 1.3.0 |

The problem we encountered
We use DBUnit in tests for Spring Boot 3 + Spring Batch. The batch uses the chunk model: the ItemReader queries the DB and the ItemWriter updates it. Under these conditions, running a test with at least as many data rows as the chunk size makes the test hang forever...

What we checked and tried

Reproducing the issue
Code:

new StepBuilder("step", jobRepository)
    .<InputDto, OutputDto>chunk(
        CHUNK_SIZE, transactionManager)
    .reader(reader)
    .processor(processor)
    .writer(writer)
    .build();

We were testing a batch containing a Step like the above as follows:

@SpringBatchTest
@SpringBootTest
@TestPropertySource(
    properties = {
        "spring.batch.job.names: hoge-batch",
        "targetDate: 2023-01-01",
    })
@Transactional(isolation = Isolation.SERIALIZABLE)
@TestExecutionListeners({
    DependencyInjectionTestExecutionListener.class,
    DirtiesContextTestExecutionListener.class,
    TransactionDbUnitTestExecutionListener.class
})
@DbUnitConfiguration(dataSetLoader = XlsDataSetLoader.class)
class HogeBatchJobTest {
    @Autowired private JobLauncherTestUtils jobLauncherTestUtils;

    @BeforeEach
    void setUp() { }

    @Test
    @DatabaseSetup("classpath:dbunit/test_data_import.xlsx")
    @ExpectedDatabase(
        value = "classpath:dbunit/data_expected.xlsx",
        assertionMode = DatabaseAssertionMode.NON_STRICT_UNORDERED)
    void launchJob() throws Exception {
        val jobExecution = jobLauncherTestUtils.launchJob();
        assertEquals(ExitStatus.COMPLETED, jobExecution.getExitStatus());
    }
}

With test data smaller than the chunk size, the test passed without problems. With test data of the chunk size or more, it froze partway through and never finished. (This happened even with a chunk size of 1 and a single data row.)

Suspecting the DB connections
In Spring Batch, each chunk is one transaction. Thinking that processing these in parallel would require at least as many DB connections as the degree of concurrency, we changed the pool size to check:

spring:
  datasource:
    hikari:
      maximum-pool-size: 10 → changed to 100, etc.

However, this did not resolve the problem...

Starting to debug
We enabled debug logging and ran the test again. It seemed to stop at the log output on line 88 of org.springframework.batch.core.step.item.ChunkOrientedTasklet, so we set a breakpoint and stepped through. We arrived at line 408 of org.springframework.batch.core.step.tasklet.TaskletStep. A semaphore could not be locked (that is, it was waiting for the lock to be released), and execution was stuck there.

Into the depths of Spring Batch
We continued to trace the flow of processing step by step. Roughly, the relevant parts proceed as follows:

1. TaskletStep's doExecute is executed
2. A semaphore is created
3. The semaphore is passed to ChunkTransactionCallback (an implementation of TransactionSynchronization), tied to the transaction execution, and set on the RepeatTemplate
4. Step processing starts for the chunk
5. The semaphore is locked in TaskletStep's doInTransaction
6. The step's main processing runs
7. Commit is executed in TransactionSynchronizationUtils
8. AbstractPlatformTransactionManager's triggerAfterCompletion method is called, which executes invokeAfterCompletion
9. Within invokeAfterCompletion, ChunkTransactionCallback's afterCompletion method releases the semaphore
10. If data still remains, return to step 4

In our test run, the semaphore release in step 9 never happened, so execution went back through step 4 and froze at step 5.

Why is the semaphore not released...
In the semaphore release above, the relevant code contains a check: status.isNewSynchronization() never became true, so invokeAfterCompletion was not executed. org.springframework.transaction.support.DefaultTransactionStatus#isNewSynchronization looks like this:

/**
 * Return if a new transaction synchronization has been opened
 * for this transaction.
 */
public boolean isNewSynchronization() {
    return this.newSynchronization;
}

In other words, it returns whether a new transaction synchronization was opened for this transaction.

Discussion
As for why isNewSynchronization does not become true, we have honestly not been able to trace that far. However, the logs from some trial and error seemed to hold a hint.

Without @Transactional on the test class:

2024-03-27T08:57:14.527+0000 [Test worker] TRACE o.s.t.i.TransactionInterceptor - Completing transaction for [org.springframework.batch.core.repository.support.SimpleJobRepository.update] hoge-batch 19
2024-03-27T08:57:14.527+0000 [Test worker] DEBUG o.s.orm.jpa.JpaTransactionManager - Initiating transaction commit hoge-batch 19
2024-03-27T08:57:14.527+0000 [Test worker] DEBUG o.s.orm.jpa.JpaTransactionManager - Committing JPA transaction on EntityManager [SessionImpl(1075727694<open>)] hoge-batch 19
2024-03-27T08:57:14.534+0000 [Test worker] DEBUG o.s.orm.jpa.JpaTransactionManager - Closing JPA EntityManager [SessionImpl(1075727694<open>)] after transaction hoge-batch 19
2024-03-27T08:57:14.536+0000 [Test worker] DEBUG o.s.b.repeat.support.RepeatTemplate - Repeat operation about to start at count=2 hoge-batch 19

With @Transactional on the test class:

2024-03-27T09:04:04.600+0000 [Test worker] TRACE o.s.t.i.TransactionInterceptor - Completing transaction for [org.springframework.batch.core.repository.support.SimpleJobRepository.update] hoge-batch 20
2024-03-27T09:04:04.601+0000 [Test worker] DEBUG o.s.b.repeat.support.RepeatTemplate - Repeat operation about to start at count=2 hoge-batch 20

With @Transactional, JpaTransactionManager's "Initiating transaction commit" is not logged. The test class uses TransactionalTestExecutionListener, and with @Transactional everything runs in a single transaction. This is so that the test data registered by DBUnit can be seen by the code under test and then discarded (rolled back) after the test. However, we concluded that, because of this, the existing transaction is also reused across repeated executions of the same Step (that is, no new transaction is started), which is why isNewSynchronization never becomes true.

Workaround: stop using TransactionalTestExecutionListener
It is a brute-force approach, but by not using TransactionalTestExecutionListener and cleaning up after the test ourselves, we were able to avoid the freeze.

class HogeTestExecutionListenerChain extends TestExecutionListenerChain {
    private static final Class<?>[] CHAIN = {
        HogeTransactionalTestExecutionListener.class,
        DbUnitTestExecutionListener.class
    };

    @Override
    protected Class<?>[] getChain() {
        return CHAIN;
    }
}

class HogeTransactionalTestExecutionListener implements TestExecutionListener {
    private static final String CREATE_BACKUP_TABLE_SQL = "CREATE TEMPORARY TABLE backup_%s AS SELECT * FROM %s";
    private static final String TRUNCATE_TABLE_SQL = "TRUNCATE TABLE %s";
    private static final String BACKUP_INSERT_SQL = "INSERT INTO %s SELECT * FROM backup_%s";
    private static final List<String> TARGET_TABLE_NAMES = List.of(
        "hoge",
        "fuga",
        "dadada");

    /**
     * Create working tables for the test
     *
     * @param testContext
     * @throws Exception
     */
    @Override
    public void beforeTestMethod(TestContext testContext) throws Exception {
        val dataSource = (DataSource) testContext.getApplicationContext().getBean("dataSource");
        val jdbcTemp = new JdbcTemplate(dataSource);
        // Back up the existing data to temporary tables before the test
        TARGET_TABLE_NAMES.forEach(
            tableName -> jdbcTemp.execute(String.format(CREATE_BACKUP_TABLE_SQL, tableName, tableName)));
        // Initialize the tables
        TARGET_TABLE_NAMES.forEach(
            tableName -> jdbcTemp.execute(String.format(TRUNCATE_TABLE_SQL, tableName)));
    }

    /**
     * Remove the working tables for the test
     *
     * @param testContext
     * @throws Exception
     */
    @Override
    public void afterTestMethod(TestContext testContext) throws Exception {
        val dataSource = (DataSource) testContext.getApplicationContext().getBean("dataSource");
        val jdbcTemp = new JdbcTemplate(dataSource);
        // Restore the tables to their original state
        TARGET_TABLE_NAMES.forEach(
            tableName -> jdbcTemp.execute(String.format(TRUNCATE_TABLE_SQL, tableName)));
        TARGET_TABLE_NAMES.forEach(
            tableName -> jdbcTemp.execute(String.format(BACKUP_INSERT_SQL, tableName, tableName)));
    }
}

We remove TransactionDbUnitTestExecutionListener so that TransactionalTestExecutionListener is no longer used. (We still want to load the test data from Excel, so DbUnitTestExecutionListener stays.) We create a custom TestExecutionListener whose pre-processing moves the data of the target tables to temporary tables, restoring them after the test. beforeTestMethod runs before the test method and afterTestMethod runs after it. With the above, the tests now run with Spring's transaction management left untouched.

Impressions
This was a problem where we groped in the dark, unable to find information that quite fit even after searching. That said, digging into the Spring Boot source led to various discoveries, and it was an instructive bit of code reading (even if my understanding has not fully caught up...). It also left me with questions to keep learning about: are we using Spring and the test libraries as their authors intended, is our implementation valid given those assumptions, and are there more appropriate classes for the job? I want to keep tackling exploration and improvement with a "how does this actually work?" curiosity. Thank you for reading this article. I hope it helps anyone struggling with a similar problem.

[^1]: Post by a Common Service Development Group member, no. 1 [ Incorporating domain-driven design (DDD) into a payment platform with an eye toward global expansion ]
[^2]: Post by a Common Service Development Group member, no. 2 [ How a team consisting solely of members with less than a year at the company succeeded in developing a new system with remote mob programming ]
[^3]: Post by a Common Service Development Group member, no. 3 [ Improving deployment traceability across multiple environments using JIRA and GitHub Actions ]
[^4]: Post by a Common Service Development Group member, no. 4 [ Setting up a development environment with VS Code Dev Containers ]
[^5]: Post by a Common Service Development Group member, no. 5 [ Upgrading Spring Boot from 2.x to 3.x ]
[^6]: Post by a Common Service Development Group member, no. 6 [ A guide to building a local S3 development environment with MinIO ]
Hello. I am @p2sk from the DBRE team. In the DBRE (Database Reliability Engineering) team, our cross-functional efforts are dedicated to addressing challenges such as resolving database-related issues and developing platforms that effectively balance governance with agility within our organization. DBRE is a relatively new concept, so very few companies have dedicated organizations to address it. Even among those that do, the focus and approaches often vary. This makes DBRE an exceptionally captivating field, constantly evolving and developing. For more information on the background of the DBRE team and its role at KINTO Technologies, please see our Tech Blog article, The need for DBRE in KTC.

After being unable to identify the root cause of a timeout error resulting from lock contention (blocking) on Aurora MySQL, we developed a mechanism to consistently collect the information needed to follow up on such causes; this article presents that example. The concept can be applied not only to RDS for MySQL but also to MySQL PaaS offerings on clouds other than AWS, as well as to standalone MySQL, so I hope you find this article useful.

Background: Timeout occurred due to blocking
A product developer contacted us to investigate a query timeout issue that occurred in an application. The error code was SQL Error: 1205, which suggests a timeout due to exceeding the waiting time for lock acquisition. We use Performance Insights for Aurora MySQL monitoring. Upon reviewing the DB load during the relevant time period, there was indeed an increase in the "synch/cond/innodb/row_lock_wait_cond" wait event, which occurs when waiting to acquire row locks.
Performance Insights Dashboard: Lock waits (depicted in orange) are increasing
Performance Insights has a tab called "Top SQL" that displays the SQL queries executed, in descending order of their contribution to DB load at any given time. When I checked it, the UPDATE SQL was displayed as shown in the figure below, but only the SQL that timed out (the blocked side) was shown.
Top SQL tab: The update statement displayed is the one on the blocked side
"Top SQL" is very useful for identifying SQL queries that, for example, contribute heavily during periods of high CPU load. On the other hand, in cases like this one, it does not help identify the root cause of blocking. This is because the SQL causing the blocking (the blocker) may not itself contribute to the database load. For example, suppose the following SQL is executed in a session:

-- Query A
start transaction;
update sample_table set c1 = 1 where pk_column = 1;

This is a single-row update by primary key, so it completes very quickly. However, if the transaction is left open and the following SQL is executed in another session, it will wait for lock acquisition and blocking will occur.

-- Query B
update sample_table set c2 = 2

Query B remains blocked, so its long wait time makes it appear in "Top SQL." Query A, conversely, completes instantly and appears neither in "Top SQL" nor in the MySQL slow query log. This example is extreme, but it illustrates a case in which it is difficult to identify blockers using Performance Insights. In contrast, there are cases where Performance Insights can identify blockers: for example, numerous executions of an identical UPDATE SQL can lead to a "Blocking Query = Blocked Query" scenario.
In such cases, Performance Insights is sufficient. However, the causes of blocking are diverse, and the current Performance Insights has its limitations. Performance Insights was also unable to identify the blocker in this incident. We reviewed various logs, including the Audit Log, Error Log, General Log, and Slow Query Log, but could not determine the cause. Through this investigation, we found that we currently have insufficient information to identify the cause of blocking. A situation in which, should the same event occur again, we would have no choice but to answer "the cause is unknown due to lack of information" needed to be improved. Therefore, we conducted a "solution investigation" into identifying the root cause of blocking.

Solution Investigation
We investigated the following potential solutions for this issue:

Amazon DevOps Guru for RDS
SaaS monitoring
DB-related OSS and DB monitoring tools

Each of these is described below.

Amazon DevOps Guru for RDS
Amazon DevOps Guru is a machine-learning-powered service that analyzes metrics and logs from monitored AWS resources, automatically detects performance and operational issues, and provides recommendations for resolving them. DevOps Guru for RDS is the feature within DevOps Guru dedicated to detecting DB-related issues. The difference from Performance Insights is that DevOps Guru for RDS automatically analyzes issues and suggests solutions. It conveys AWS's philosophy of realizing a world of "managed solutions to issues in the event of an incident." When the actual blocking occurred, the following recommendations were displayed:
DevOps Guru for RDS Recommendations: Suggested wait events and SQL to investigate
The SQL displayed was the SQL on the blocked side, so identifying the blocker seemed difficult. Currently, it appears to present only a link to a document describing how to investigate when a "synch/cond/innodb/row_lock_wait" wait event is contributing to the DB load. For now, humans still need to make the final judgment on the proposed causes and recommendations, but I expect a more managed incident-response experience to be provided in the future.

SaaS monitoring
One solution that can investigate the cause of database blocking at the SQL level is Datadog's Database Monitoring feature. However, it currently supports only PostgreSQL and SQL Server, not MySQL. Similarly, tools like New Relic and Mackerel do not appear to offer a feature for investigating blocking after the fact.

DB-related OSS and DB monitoring tools
We also investigated the following DB-related OSS and DB monitoring tools, but none seemed to offer a solution:

Percona Toolkit
Percona Monitoring and Management
MySQL Enterprise Monitor

On the other hand, SQL Diagnostic Manager for MySQL was the only tool capable of supporting a MySQL blocking investigation. Although it is a DB monitoring tool for MySQL, we opted not to evaluate or adopt it, because its extensive functionality exceeded our needs and its price was a limiting factor. Based on this investigation, we found that there were almost no existing solutions, so we decided to build our own mechanism. We therefore first organized a "manual investigation procedure for blocking causes." Since version 2 of Aurora MySQL (MySQL 5.7) is scheduled to reach EOL on October 31 of this year, the target is Aurora 3 (MySQL 8.0).
The target storage engine is InnoDB.

Manual investigation procedure for blocking causes
To check blocking information in MySQL, you need to refer to the following two types of tables. Note that the Performance Schema must be enabled by setting the MySQL parameter performance_schema to 1.

performance_schema.metadata_locks
Contains information on acquired metadata locks
Blocked queries appear as records with lock_status = 'pending'

performance_schema.data_lock_waits
Contains blocking information at the storage engine level (e.g., row locks)

For example, selecting from performance_schema.data_lock_waits in a situation where metadata-caused blocking occurs will return no records. The information in the two types of tables is therefore used together during the investigation. For easier analysis, it is useful to use Views that join these tables with others; these are introduced below.

Step 1: Make use of sys.schema_table_lock_waits
sys.schema_table_lock_waits is a SQL wrapper View over the following three tables:

performance_schema.metadata_locks
performance_schema.threads
performance_schema.events_statements_current

Selecting from this View while a wait to acquire a metadata lock on a resource is occurring will return the relevant records. For example, consider the following situation:

-- Session 1: Acquire and keep metadata locks on the table with lock tables
lock tables sample_table write;

-- Session 2: Waits to acquire an incompatible shared metadata lock
select * from sample_table;

In this situation, selecting from sys.schema_table_lock_waits returns the following recordset. The results of this View do not directly identify the blocker's SQL. The blocked query can be identified in the waiting_query column, but there is no blocking_query column, so I will use blocking_thread_id or blocking_pid to identify it.

How to identify blockers: SQL-based method
When identifying blockers on an SQL basis, use the blocker's thread ID. The following query against performance_schema.events_statements_current retrieves the last SQL text executed by the relevant thread.

SELECT THREAD_ID, SQL_TEXT FROM performance_schema.events_statements_current WHERE THREAD_ID = 55100\G

The result should look like this, for example. It shows that the thread was performing lock tables on sample_table, so the blocker could be identified. This method has its drawbacks, however: if the blocker executes an additional query after acquiring the locks, that newer SQL will be retrieved instead and the blocker cannot be identified. For example:

-- Session 1: Acquire and keep metadata locks on the table with lock tables
lock tables sample_table write;

-- Session 1: Run another query after lock tables
select 1;

If you execute a similar query in this state, you will get the following results. Alternatively, performance_schema.events_statements_history can be used to retrieve the last N SQL texts executed by the relevant thread.

SELECT THREAD_ID, SQL_TEXT FROM performance_schema.events_statements_history WHERE THREAD_ID = 55100 ORDER BY EVENT_ID\G

The result should look like this. Because the history could be retrieved, the blocker could again be identified. The parameter performance_schema_events_statements_history_size controls how many SQL history entries are kept per thread (set to 10 during our verification). The larger the size, the more likely you are to identify blockers, but it also means using more memory, and there is a limit to how large the size can be, so finding a balance is important.
Whether history collection is enabled can be checked by selecting from performance_schema.setup_consumers. The performance_schema.events_statements_history consumer appears to be enabled by default on Aurora MySQL.

How to identify blockers: Log-based method
When identifying blockers on a log basis, use the General Log and the Audit Log. For example, if the General Log is enabled on Aurora MySQL, the entire SQL history executed by a process can be retrieved with the following query in CloudWatch Logs Insights:

fields @timestamp, @message
| parse @message /(?<timestamp>[^\s]+)\s+(?<process_id>\d+)\s+(?<type>[^\s]+)\s+(?<query>.+)/
| filter process_id = 208450
| sort @timestamp asc

Executing this query results in the following:
CloudWatch Logs Insights query execution result: the SQL enclosed in the red box is the blocker
We generally enable the General Log. With the SQL-based method, there is a concern that the blocker's SQL may already have been evicted from the history table and become unidentifiable, so we decided to use the log-based identification method this time.

Considerations for identifying blockers
Identifying blockers ultimately requires human visual confirmation and judgment. The reason is that lock acquisition information is tied directly to the thread, and the SQL executed by a thread changes over time. Therefore, in a situation like the earlier example, where the blocker has finished executing its query but still holds the lock, the root-cause SQL must be inferred from the history of SQL executed by the blocker process. Even so, just knowing the blocker's thread ID or process ID can be expected to significantly improve the rate of identifying the root cause.

Step 2: Make use of sys.innodb_lock_waits
This is a SQL wrapper View over the following three tables:

performance_schema.data_lock_waits
information_schema.INNODB_TRX
performance_schema.data_locks

Selecting from this View while a wait for a lock implemented by the storage engine (InnoDB) is occurring will return the relevant records. For example:

-- Session 1: Keep the transaction that updated the record open
start transaction;
update sample_table set c2 = 10 where c1 = 1;

-- Session 2: Try to update the same record
delete from sample_table where c1 = 1;

In this situation, selecting from sys.innodb_lock_waits returns the following recordset. As with sys.schema_table_lock_waits, this result does not directly identify the blocker. Therefore, blocking_pid is used to identify the blocker with the log-based method described above:

fields @timestamp, @message
| parse @message /(?<timestamp>[^\s]+)\s+(?<process_id>\d+)\s+(?<type>[^\s]+)\s+(?<query>.+)/
| filter process_id = 208450
| sort @timestamp asc

Executing this query results in the following:
CloudWatch Logs Insights query execution result: the SQL enclosed in the red box is the blocker

Summary of the above
As a first step toward post-hoc investigation of the root causes of Aurora MySQL blocking, I have outlined how to investigate when blocking occurs. The procedure is as follows:

Identify the blocker's process ID using the two Views sys.schema_table_lock_waits and sys.innodb_lock_waits
Use CloudWatch Logs Insights to retrieve that process ID's SQL execution history from the General Log
Identify (estimate) the root-cause SQL while visually checking the history

Step 1 only returns results while blocking is actually occurring. Therefore, periodically collecting and storing the information equivalent to the two Views at N-second intervals enables post-hoc investigation.
In addition, N must be chosen so that the relationship "N seconds < application timeout period" holds.

Additional information about blocking
Here are two additional points about blocking. First, I will outline the difference between deadlocks and blocking, followed by an explanation of the blocking tree.

Differences from deadlocks
Blocking is occasionally confused with deadlock, so let's summarize the differences. A deadlock is also a form of blocking, but one that is determined not to resolve unless one of the processes is forcibly rolled back. Therefore, when InnoDB detects a deadlock, it is resolved automatically and relatively quickly. In the case of normal blocking, on the other hand, InnoDB does not intervene, because the blocking resolves when the blocker's query completes. A comparison of the two is summarized in the table below.

| | Blocking | Deadlock |
| --- | --- | --- |
| Automatic resolution by InnoDB | Not supported | Supported |
| Query completion | Both the blocker and the blocked side eventually complete execution, unless terminated midway by a KILL or a timeout error. | One transaction is forcibly terminated by InnoDB. |
| General solution | Resolves spontaneously when the blocker's query completes, or ends with a timeout error after the application-set query timeout elapses. | After InnoDB detects the deadlock, it is resolved by forcibly rolling back one of the transactions. |

Blocking tree
Though not an official MySQL term, I will also describe the blocking tree. This refers to a situation where a query that is a blocker is itself blocked by another blocker. For example:

-- Session 1
begin;
update sample_table set c1 = 2 where pk_column = 1;

-- Session 2
begin;
update other_table set c1 = 3 where pk_column = 1;
update sample_table set c1 = 4 where pk_column = 1;

-- Session 3
update other_table set c1 = 5 where pk_column = 1;

In this situation, selecting from sys.innodb_lock_waits returns two records: "Session 1 is blocking Session 2" and "Session 2 is blocking Session 3." Here, the blocker from Session 3's perspective is Session 2, but the root cause of the problem (the root blocker) is Session 1. Blocking can thus sometimes form a tree, making log-based investigations in such cases even more difficult. The importance of collecting blocking information in advance lies precisely in how difficult such causes are to investigate. In the following sections, I will introduce the design and implementation of the blocking-information collection mechanism.

Architectural Design
We have multiple regions, with multiple Aurora MySQL clusters running in each region. The configuration therefore needed to minimize deployment and operational load across regions and clusters. Other requirements include:

Functional requirements
Can execute arbitrary SQL periodically against Aurora MySQL
Can collect Aurora MySQL information from any region
Can manage which DBs are targeted
Can store query execution results in external storage
Stored data can be queried with SQL
Privileges can be managed to restrict access to the collected source-database data, allowing only authorized people to view it
Non-functional requirements
Minimal overhead on the DBs being collected from
Data freshness during analysis may lag by up to about five minutes
We are notified if the system becomes inoperable
Responses within seconds during SQL-based analysis
Collected logs can be aggregated into storage in a single location
Financial costs of operation are minimized

In addition, the tables to be collected were organized as follows.

Tables to be collected
Although one option is to periodically gather results from sys.schema_table_lock_waits and sys.innodb_lock_waits themselves, these Views are complex compared to selecting directly from their source tables, which would increase the load on the system. Therefore, to meet the non-functional requirement of minimal overhead on the DBs being collected from, we opted to collect the following six tables, which are the sources for the Views. Equivalent views were then constructed on the query engine side, shifting the query load away from the database.

Source tables of sys.schema_table_lock_waits
performance_schema.metadata_locks
performance_schema.threads
performance_schema.events_statements_current

Source tables of sys.innodb_lock_waits
performance_schema.data_lock_waits
information_schema.INNODB_TRX
performance_schema.data_locks

The easiest way would be to use MySQL Event, MySQL's task scheduling feature, to execute SELECT queries on these tables every N seconds and store the results in a dedicated table. However, this method does not fit the requirements, because it generates a high write load on the target DB and requires logging in to each DB individually to check the results. Therefore, other methods were considered.

Architecture Patterns
First, an abstract architecture diagram was created. Based on this diagram, the AWS services to be used at each layer were selected according to the requirements.

Collector service selection
After evaluating the following services based on our past experience, we decided to proceed with a design centered on Lambda.

EC2
This workload is assumed not to need the processing power of an always-running EC2 instance, which would be excessive from both an administrative and a cost perspective
The mechanism would depend on deployment to EC2 and the execution environment on EC2

ECS on EC2
As above, an always-running EC2 instance would be excessive from both an administrative and a cost perspective
Depends on a container repository such as ECR

ECS on Fargate
Serverless like Lambda, but depends on a container repository such as ECR

Lambda
More independent than the other compute-layer services, and considered best suited to the lightweight processing envisioned here

Storage / Query Interface service selection
The storage / query interface was configured as S3 + Athena. The reasons are as follows:

We want to run SQL with JOINs (CloudWatch Logs was also considered for storage, but rejected due to this requirement)
Fast response times and transaction processing are not required
No advantage in using DB services such as RDS, DynamoDB, or Redshift

Buffer service selection
We adopted Amazon Data Firehose as the buffer layer between the collector and storage. We also considered Kafka, SQS, Kinesis Data Streams, and others, but chose Firehose for the following reasons.
- Data put to Firehose is automatically stored in S3 (no additional coding required)
- Buffering by time or data size reduces the number of files in S3, enabling bulk storage
- Automatic compression reduces file sizes in S3
- The dynamic partitioning feature allows S3 file paths to be determined dynamically

Based on the services selected above, five architecture patterns were created. For simplicity, the figures below illustrate a single region.

Option 1: Execute Lambda from MySQL Event

Aurora MySQL is integrated with Lambda. This pattern invokes Lambda periodically using MySQL Event. The architecture is as follows:

![Option 1: Architecture Diagram of Lambda Execution Pattern in MySQL Event](/assets/blog/authors/m.hirose/2024-03-12-13-16-16.png =600x)

Option 2: Save data directly from Aurora into S3

Aurora MySQL is also integrated with S3 and can store data there directly. The architecture is very simple, as shown in the figure below. On the other hand, like option 1, this option requires deploying MySQL Events, so creating or modifying an Event means deploying it across multiple DB clusters, either manually and individually or via a mechanism that deploys to all target clusters.

![Option 2: Architecture Diagram of Pattern of Saving Files directly from Aurora to S3](/assets/blog/authors/m.hirose/2024-03-12-13-15-50.png =300x)

Option 3: Step Functions Pattern A

This pattern combines Step Functions and Lambda. Using the Map state, child workflows corresponding to the collector can run in parallel for each target cluster. The process of "executing SQL at N-second intervals" is implemented with a combination of Lambda and the Wait state, which results in a very large number of state transitions. AWS Step Functions Standard Workflows are priced per state transition, while Express Workflows have a maximum execution time of five minutes per execution but incur no per-transition charge. Therefore, Express Workflows are used where the number of state transitions is large. This AWS Blog was used as a reference.

Option 4: Step Functions Pattern B

Like option 3, this pattern combines Step Functions and Lambda. The difference is that "execute SQL at N-second intervals" is implemented inside Lambda, repeating "execute SQL -> sleep N seconds" for 10 minutes. Since a Lambda execution is limited to a maximum of 15 minutes, EventBridge invokes Step Functions every 10 minutes. Because the number of state transitions is very small, the Step Functions cost is low. On the other hand, since Lambda keeps running even while sleeping, the Lambda bill is expected to be higher than in option 3.

![Option 4: Step Functions Architecture Diagram of Pattern B](/assets/blog/authors/m.hirose/2024-03-12-13-23-40_en.png =600x)

Option 5: Sidecar Pattern

We primarily use ECS as our container orchestration service, and we can assume that at least one ECS cluster can reach each Aurora MySQL. Placing a newly implemented collector as a sidecar in a task has the advantage of not incurring additional compute costs such as Lambda. However, if it does not fit within the Fargate task's resources, they need to be expanded.

![Option 5: Architecture Diagram of Sidecar Pattern](/assets/blog/authors/m.hirose/2024-03-12-13-47-37.png =600x)
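To make the collector loop that options 3 and 4 schedule differently more concrete: at its core it is "run the detection SQL, then sleep N seconds, until the time budget runs out." The following is a minimal Go sketch of that loop under stated assumptions; collectOnce, the 5-second interval, and the 10-minute budget are hypothetical stand-ins, not the actual implementation.

```go
package main

import (
	"context"
	"log"
	"time"
)

// collectOnce is a hypothetical stand-in for one round of
// "run the blocking-detection SQL and send results to Firehose".
func collectOnce(ctx context.Context) error {
	log.Println("collecting...")
	return nil
}

func main() {
	// Time budget per invocation (option 4 uses roughly 10 minutes;
	// option 3's Express Workflows are stopped after about 4 minutes).
	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Minute)
	defer cancel()

	ticker := time.NewTicker(5 * time.Second) // the N-second interval
	defer ticker.Stop()

	for {
		select {
		case <-ctx.Done():
			return // budget exhausted; the next invocation takes over
		case <-ticker.C:
			if err := collectOnce(ctx); err != nil {
				log.Printf("collect failed: %v", err)
			}
		}
	}
}
```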
Architecture Comparison

The comparison of the options is summarized in the table below.

| | Option 1 | Option 2 | Option 3 | Option 4 | Option 5 |
| --- | --- | --- | --- | --- | --- |
| Developer and operator | DBRE | DBRE | DBRE | DBRE | Other teams must be asked, since the container area falls outside our scope |
| Financial costs | ☀ | ☀ | ☀ | ☁ | ☀ |
| Implementation costs | ☁ | ☀ | ☁ | ☀ | ☀ |
| Development agility | ☀ (DBRE) | ☀ (DBRE) | ☀ (DBRE) | ☀ (DBRE) | ☁ (must be coordinated across teams) |
| Deployability | ☁ (Event deployment requires manual work or a dedicated mechanism) | ☁ (Event deployment requires manual work or a dedicated mechanism) | ☀ (managed as IaC within the existing development flow) | ☀ (managed as IaC within the existing development flow) | ☁ (must be coordinated across teams) |
| Scalability | ☀ | ☀ | ☀ | ☀ | ☁ (must be coordinated with the Fargate team) |
| Specific considerations | IAM and DB user permissions must be configured to launch Lambda from Aurora | No buffering, so writes to S3 are synchronous and API calls are frequent | The at-least-once execution model of Express Workflows requires careful handling | The highest financial cost, because Lambda runs longer than necessary | There can be as many sidecar containers as tasks, resulting in duplicated processing |

Based on this comparison, we adopted option 3, which uses Step Functions with both Standard and Express Workflows. The reasons are as follows.

- The types of collected data are expected to expand, and a team that controls development and operations itself (DBRE) can respond swiftly.
- The MySQL Event options are simple to configure, yet involve numerous considerations, such as modifying cross-cutting IAM permissions and adding DB user permissions; the human cost is high whether this is automated or handled manually.
- Although option 3 costs a little more to implement, its additional benefits make it the most balanced choice.

In the following sections, I will introduce the ideas devised while implementing the chosen option, and the final architecture.

Implementation

Our DBRE team develops in a monorepo and uses Nx as the management tool. Infrastructure is managed with Terraform, and the Lambda functions are implemented in Go. For more information on the DBRE team's development flow with Nx, see our Tech Blog article "AWS Serverless Architecture with Monorepo Tools - Nx and Terraform! (Japanese)".

Final Architecture Diagram

Taking multi-region support and other considerations into account, the final architecture is shown in the figure below. The main points are:

- Express Workflows are terminated after four minutes, because forced termination after five minutes is treated as an error.
- The number of accesses to DynamoDB is small and latency is not a bottleneck, so DynamoDB is aggregated in the Tokyo region.
- Data synchronization to S3 after putting to Firehose is asynchronous, so latency is not a bottleneck and S3 is likewise aggregated in the Tokyo region.
- To reduce the cost of frequent access to Secrets Manager, secrets are retrieved outside the state loop.
- Because Express Workflows employ an at-least-once execution model, a locking mechanism using DynamoDB prevents each Express Workflow from running multiple times (see the sketch after this list).
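As a rough illustration of that last point, a conditional put with attribute_not_exists can serve as the lock. The sketch below uses the AWS SDK for Go v2; the table name, key attribute, and execution ID are hypothetical, and the actual implementation (described later) may differ.

```go
package main

import (
	"context"
	"errors"
	"log"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/config"
	"github.com/aws/aws-sdk-go-v2/service/dynamodb"
	"github.com/aws/aws-sdk-go-v2/service/dynamodb/types"
)

// tryAcquireLock returns true only for the first caller that puts the
// execution ID, thanks to the attribute_not_exists condition.
func tryAcquireLock(ctx context.Context, client *dynamodb.Client, executionID string) (bool, error) {
	_, err := client.PutItem(ctx, &dynamodb.PutItemInput{
		TableName: aws.String("collector-locks"), // hypothetical table name
		Item: map[string]types.AttributeValue{
			"pk": &types.AttributeValueMemberS{Value: executionID},
		},
		ConditionExpression: aws.String("attribute_not_exists(pk)"),
	})
	var ccf *types.ConditionalCheckFailedException
	if errors.As(err, &ccf) {
		return false, nil // someone else already holds the lock
	}
	if err != nil {
		return false, err
	}
	return true, nil
}

func main() {
	cfg, err := config.LoadDefaultConfig(context.Background())
	if err != nil {
		log.Fatal(err)
	}
	ok, err := tryAcquireLock(context.Background(), dynamodb.NewFromConfig(cfg), "unique-execution-id")
	if err != nil {
		log.Fatal(err)
	}
	if !ok {
		return // skip: another child workflow is already running
	}
	// ... proceed with collection ...
}
```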
In the following sections, I will introduce the ideas devised during implementation.

Create a dedicated DB user for each DB

Only the following two permissions are required to execute the target SQL:

```sql
GRANT SELECT ON performance_schema.* TO ${user_name};
-- required to select information_schema.INNODB_TRX
GRANT PROCESS ON *.* TO ${user_name};
```

We created a mechanism that provisions DB users with only these permissions on every Aurora MySQL. We already have a batch process that connects to all Aurora MySQL clusters daily and collects various information, and it was modified to create the required DB users on all DBs. As a result, whenever a new DB is created, the required DB user comes into existence automatically.

Reduce the DB load and the data size stored in S3

Some of the six target tables return records even when no blocking is occurring. If everything were selected every N seconds, the load on Aurora would increase unnecessarily (if only slightly), and unnecessary data would also accumulate in S3. To prevent this, the implementation selects all the relevant tables only while blocking is occurring. To minimize the load, the blocking-detection SQL was organized as follows.

Metadata blocking detection

The detection query for metadata blocking is:

```sql
select * from `performance_schema`.`metadata_locks` where lock_status = 'PENDING' limit 1
```

Only when this query returns a record do we SELECT from the following three tables and send the results to Firehose:

- performance_schema.metadata_locks
- performance_schema.threads
- performance_schema.events_statements_current

InnoDB blocking detection

The detection query for InnoDB blocking is:

```sql
select * from `information_schema`.`INNODB_TRX` where timestampdiff(second, `TRX_WAIT_STARTED`, now()) >= 1 limit 1;
```

Only when this query returns a record do we SELECT from the following three tables and send the results to Firehose:

- performance_schema.data_lock_waits
- information_schema.INNODB_TRX
- performance_schema.data_locks

Parallel processing of queries using goroutines

Even if the SELECTs on the individual tables run at slightly different moments, the probability of data inconsistency when joining later is low as long as the blocking continues. Still, it is preferable to run them as close to simultaneously as possible. Also, to keep collecting data at N-second intervals, the collector Lambda's execution time must be as short as possible. For both of these reasons, the queries are executed as concurrently as possible using goroutines, as in the sketch below.
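A minimal sketch of that fan-out, assuming a hypothetical queryAndSend helper that runs one SELECT and forwards the rows to Firehose (not the actual implementation):

```go
package main

import (
	"context"

	"golang.org/x/sync/errgroup"
)

// queryAndSend is a hypothetical helper: it SELECTs one table and
// sends the rows to Firehose.
func queryAndSend(ctx context.Context, table string) error {
	// ... db.QueryContext + Firehose put ...
	return nil
}

// collectAll fires the three SELECTs concurrently so the snapshots
// are taken as close to the same instant as possible.
func collectAll(ctx context.Context) error {
	tables := []string{
		"performance_schema.data_lock_waits",
		"information_schema.INNODB_TRX",
		"performance_schema.data_locks",
	}
	g, ctx := errgroup.WithContext(ctx)
	for _, t := range tables {
		t := t // capture the loop variable (pre-Go 1.22 semantics)
		g.Go(func() error { return queryAndSend(ctx, t) })
	}
	return g.Wait() // the first error, if any, cancels the others via ctx
}
```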
Use of session variables to avoid unexpected overloads

Although we confirm in advance that the load of the queries to be executed is sufficiently low, situations can still arise where "the execution time is longer than expected" or "the information-gathering queries themselves get caught in blocking." Therefore, we set max_execution_time and TRANSACTION ISOLATION LEVEL READ UNCOMMITTED at the session level so that information can be collected as safely as possible. To implement this in Go, we override the Connect() function of the driver.Connector interface in the database/sql/driver package. The implementation image, excluding error handling, is as follows:

```go
package main

import (
	"context"
	"database/sql"
	"database/sql/driver"

	"github.com/go-sql-driver/mysql"
)

type sessionCustomConnector struct {
	driver.Connector
}

func (c *sessionCustomConnector) Connect(ctx context.Context) (driver.Conn, error) {
	conn, err := c.Connector.Connect(ctx)
	if err != nil {
		return nil, err
	}
	execer, _ := conn.(driver.ExecerContext)
	// session settings that keep the collector from overloading the DB
	sessionContexts := []string{
		"SET SESSION max_execution_time = 1000",
		"SET SESSION TRANSACTION ISOLATION LEVEL READ UNCOMMITTED",
	}
	for _, sessionContext := range sessionContexts {
		execer.ExecContext(ctx, sessionContext, nil)
	}
	return conn, nil
}

func main() {
	cfg, _ := mysql.ParseDSN("dsn_string")
	defaultConnector, _ := mysql.NewConnector(cfg)
	db := sql.OpenDB(&sessionCustomConnector{defaultConnector})
	rows, _ := db.Query("SELECT * FROM performance_schema.threads")
	...
}
```

Locking mechanism for Express Workflows

Since Step Functions' Express Workflows employ an at-least-once execution model, an entire workflow can be executed more than once. In our case, duplicate execution is not a major problem, but exactly-once behavior is preferable, so we implemented a simple locking mechanism using DynamoDB, with reference to the AWS Blog. Specifically, a Lambda that runs at the start of each Express Workflow puts an item into a DynamoDB table with an attribute_not_exists condition expression. The partition key is a unique ID generated by the parent workflow, so "the PUT succeeded" means "you are the first executor." If the PUT fails, the function concludes that another child workflow is already running, skips further processing, and exits.

Leveraging Amazon Data Firehose Dynamic Partitioning

The Firehose dynamic partitioning feature is used to determine S3 file paths dynamically. The dynamic partitioning rule (the S3 bucket prefix) was configured as follows, taking into account the Athena access control described later:

```
!{partitionKeyFromQuery:db_schema_name}/!{partitionKeyFromQuery:table_name}/!{partitionKeyFromQuery:env_name}/!{partitionKeyFromQuery:service_name}/day=!{timestamp:dd}/hour=!{timestamp:HH}/
```

If you put JSON data to the Firehose stream with this setting, Firehose finds the partition-key attributes in the JSON and automatically saves the data to S3 under a file path that follows the rule. For example, suppose the following JSON is put to Firehose:

```json
{
  "db_schema_name": "performance_schema",
  "table_name": "threads",
  "env_name": "dev",
  "service_name": "some-service",
  "other_attr1": "hoge",
  "other_attr2": "fuga",
  ...
}
```

The file is then saved in S3 under the corresponding path. There is no need to specify a file path when putting to Firehose; the file name and location are determined automatically from the predefined rules.

The file stored by Firehose in S3: Dynamic Partitioning automatically determines the file path

Since schema names, table names, and so on do not appear in the SELECT results of the MySQL tables, we add them as common columns when generating the JSON that is put to Firehose, as sketched below.
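A minimal sketch of that enrichment, assuming a row already read from MySQL; the struct layout and the stream name are hypothetical, and the SDK calls are from the AWS SDK for Go v2:

```go
package main

import (
	"context"
	"encoding/json"
	"log"
	"time"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/config"
	"github.com/aws/aws-sdk-go-v2/service/firehose"
	"github.com/aws/aws-sdk-go-v2/service/firehose/types"
)

// record wraps one row's columns with the common columns used for
// dynamic partitioning and later JOINs. Field names are illustrative.
type record struct {
	DBSchemaName        string    `json:"db_schema_name"`
	TableName           string    `json:"table_name"`
	EnvName             string    `json:"env_name"`
	ServiceName         string    `json:"service_name"`
	StatsCollectedAtUTC time.Time `json:"stats_collected_at_utc"`
	// ... the actual columns selected from the table ...
}

func main() {
	cfg, err := config.LoadDefaultConfig(context.Background())
	if err != nil {
		log.Fatal(err)
	}
	client := firehose.NewFromConfig(cfg)

	data, _ := json.Marshal(record{
		DBSchemaName:        "performance_schema",
		TableName:           "threads",
		EnvName:             "dev",
		ServiceName:         "some-service",
		StatsCollectedAtUTC: time.Now().UTC(),
	})
	// Firehose buffers the record and writes it to S3 under the
	// dynamically partitioned path; no file path is specified here.
	_, err = client.PutRecord(context.Background(), &firehose.PutRecordInput{
		DeliveryStreamName: aws.String("blocking-collector-stream"), // hypothetical
		Record:             &types.Record{Data: data},
	})
	if err != nil {
		log.Fatal(err)
	}
}
```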
Design of Athena tables and access rights

Here is an example of creating a table in Athena based on the table definition in MySQL. The CREATE statement on the MySQL side for performance_schema.metadata_locks is:

```sql
CREATE TABLE `metadata_locks` (
  `OBJECT_TYPE` varchar(64) NOT NULL,
  `OBJECT_SCHEMA` varchar(64) DEFAULT NULL,
  `OBJECT_NAME` varchar(64) DEFAULT NULL,
  `COLUMN_NAME` varchar(64) DEFAULT NULL,
  `OBJECT_INSTANCE_BEGIN` bigint unsigned NOT NULL,
  `LOCK_TYPE` varchar(32) NOT NULL,
  `LOCK_DURATION` varchar(32) NOT NULL,
  `LOCK_STATUS` varchar(32) NOT NULL,
  `SOURCE` varchar(64) DEFAULT NULL,
  `OWNER_THREAD_ID` bigint unsigned DEFAULT NULL,
  `OWNER_EVENT_ID` bigint unsigned DEFAULT NULL,
  PRIMARY KEY (`OBJECT_INSTANCE_BEGIN`),
  KEY `OBJECT_TYPE` (`OBJECT_TYPE`,`OBJECT_SCHEMA`,`OBJECT_NAME`,`COLUMN_NAME`),
  KEY `OWNER_THREAD_ID` (`OWNER_THREAD_ID`,`OWNER_EVENT_ID`)
)
```

This is defined in Athena as follows:

```sql
CREATE EXTERNAL TABLE `metadata_locks` (
  `OBJECT_TYPE` string,
  `OBJECT_SCHEMA` string,
  `OBJECT_NAME` string,
  `COLUMN_NAME` string,
  `OBJECT_INSTANCE_BEGIN` bigint,
  `LOCK_TYPE` string,
  `LOCK_DURATION` string,
  `LOCK_STATUS` string,
  `SOURCE` string,
  `OWNER_THREAD_ID` bigint,
  `OWNER_EVENT_ID` bigint,
  `db_schema_name` string,
  `table_name` string,
  `aurora_cluster_timezone` string,
  `stats_collected_at_utc` timestamp
)
PARTITIONED BY (
  env_name string,
  service_name string,
  day int,
  hour int
)
ROW FORMAT SERDE 'org.openx.data.jsonserde.JsonSerDe'
LOCATION 's3://<bucket_name>/performance_schema/metadata_locks/'
TBLPROPERTIES (
  "projection.enabled" = "true",
  "projection.day.type" = "integer",
  "projection.day.range" = "01,31",
  "projection.day.digits" = "2",
  "projection.hour.type" = "integer",
  "projection.hour.range" = "0,23",
  "projection.hour.digits" = "2",
  "projection.env_name.type" = "injected",
  "projection.service_name.type" = "injected",
  "storage.location.template" = "s3://<bucket_name>/performance_schema/metadata_locks/${env_name}/${service_name}/day=${day}/hour=${hour}"
);
```

The key point is the design of the partition keys, which ensures that only those with access permission to the source DB can access the data. We assign two tags to all AWS resources: service_name, unique to each service, and env_name, unique to each environment, and we use these tags for access control. By including the two tags in the S3 file path and writing the Resource in the IAM policy (commonly assigned to each service) with policy variables, a principal can SELECT only the partitions whose S3 file paths it is permitted to access, even though the tables themselves are shared. The following shows the permissions granted in that common IAM policy:

```json
{
    "Effect": "Allow",
    "Action": [
        "s3:GetObject"
    ],
    "Resource": [
        "arn:aws:s3:::<bucket_name>/*/${aws:PrincipalTag/env_name}/${aws:PrincipalTag/service_name}/*"
    ]
}
```

Also, I wanted the partitions to be maintenance-free, so I used partition projection. Partition projection normally requires the range of possible partition-key values to be known, but with the injected projection type the values do not need to be communicated to Athena, allowing maintenance-free dynamic partitioning.

Reproduction of Views in Athena

Here is how the six tables needed for post-blocking investigation are wrapped with views in Athena, just as in MySQL. The view definitions were adapted from the MySQL view definitions, for example by adding the common columns and adding partition-key comparisons to the JOIN conditions. The Athena definition of sys.innodb_lock_waits is as follows.
```sql
CREATE OR REPLACE VIEW innodb_lock_waits AS
select
  DATE_ADD('hour', 9, w.stats_collected_at_utc) as stats_collected_at_jst,
  w.stats_collected_at_utc as stats_collected_at_utc,
  w.aurora_cluster_timezone as aurora_cluster_timezone,
  r.trx_wait_started AS wait_started,
  date_diff('second', r.trx_wait_started, r.stats_collected_at_utc) AS wait_age_secs,
  rl.OBJECT_SCHEMA AS locked_table_schema,
  rl.OBJECT_NAME AS locked_table_name,
  rl.PARTITION_NAME AS locked_table_partition,
  rl.SUBPARTITION_NAME AS locked_table_subpartition,
  rl.INDEX_NAME AS locked_index,
  rl.LOCK_TYPE AS locked_type,
  r.trx_id AS waiting_trx_id,
  r.trx_started AS waiting_trx_started,
  date_diff('second', r.trx_started, r.stats_collected_at_utc) AS waiting_trx_age_secs,
  r.trx_rows_locked AS waiting_trx_rows_locked,
  r.trx_rows_modified AS waiting_trx_rows_modified,
  r.trx_mysql_thread_id AS waiting_pid,
  r.trx_query AS waiting_query,
  rl.ENGINE_LOCK_ID AS waiting_lock_id,
  rl.LOCK_MODE AS waiting_lock_mode,
  b.trx_id AS blocking_trx_id,
  b.trx_mysql_thread_id AS blocking_pid,
  b.trx_query AS blocking_query,
  bl.ENGINE_LOCK_ID AS blocking_lock_id,
  bl.LOCK_MODE AS blocking_lock_mode,
  b.trx_started AS blocking_trx_started,
  date_diff('second', b.trx_started, b.stats_collected_at_utc) AS blocking_trx_age_secs,
  b.trx_rows_locked AS blocking_trx_rows_locked,
  b.trx_rows_modified AS blocking_trx_rows_modified,
  concat('KILL QUERY ', cast(b.trx_mysql_thread_id as varchar)) AS sql_kill_blocking_query,
  concat('KILL ', cast(b.trx_mysql_thread_id as varchar)) AS sql_kill_blocking_connection,
  w.env_name as env_name,
  w.service_name as service_name,
  w.day as day,
  w.hour as hour
from (
  (
    (
      (
        data_lock_waits w
        join INNODB_TRX b on(
          (b.trx_id = cast(w.BLOCKING_ENGINE_TRANSACTION_ID as bigint))
          and w.stats_collected_at_utc = b.stats_collected_at_utc
          and w.day = b.day
          and w.hour = b.hour
          and w.env_name = b.env_name
          and w.service_name = b.service_name
        )
      )
      join INNODB_TRX r on(
        (r.trx_id = cast(w.REQUESTING_ENGINE_TRANSACTION_ID as bigint))
        and w.stats_collected_at_utc = r.stats_collected_at_utc
        and w.day = r.day
        and w.hour = r.hour
        and w.env_name = r.env_name
        and w.service_name = r.service_name
      )
    )
    join data_locks bl on(
      bl.ENGINE_LOCK_ID = w.BLOCKING_ENGINE_LOCK_ID
      and bl.stats_collected_at_utc = w.stats_collected_at_utc
      and bl.day = w.day
      and bl.hour = w.hour
      and bl.env_name = w.env_name
      and bl.service_name = w.service_name
    )
  )
  join data_locks rl on(
    rl.ENGINE_LOCK_ID = w.REQUESTING_ENGINE_LOCK_ID
    and rl.stats_collected_at_utc = w.stats_collected_at_utc
    and rl.day = w.day
    and rl.hour = w.hour
    and rl.env_name = w.env_name
    and rl.service_name = w.service_name
  )
)
```

In addition, the Athena definition of sys.schema_table_lock_waits is as follows.

```sql
CREATE OR REPLACE VIEW schema_table_lock_waits AS
select
  DATE_ADD('hour', 9, g.stats_collected_at_utc) as stats_collected_at_jst,
  g.stats_collected_at_utc AS stats_collected_at_utc,
  g.aurora_cluster_timezone as aurora_cluster_timezone,
  g.OBJECT_SCHEMA AS object_schema,
  g.OBJECT_NAME AS object_name,
  pt.THREAD_ID AS waiting_thread_id,
  pt.PROCESSLIST_ID AS waiting_pid,
  -- sys.ps_thread_account(p.OWNER_THREAD_ID) AS waiting_account,
  -- not supported here because it is unnecessary (in MySQL it must be included in the SELECT when collecting)
  p.LOCK_TYPE AS waiting_lock_type,
  p.LOCK_DURATION AS waiting_lock_duration,
  pt.PROCESSLIST_INFO AS waiting_query,
  pt.PROCESSLIST_TIME AS waiting_query_secs,
  ps.ROWS_AFFECTED AS waiting_query_rows_affected,
  ps.ROWS_EXAMINED AS waiting_query_rows_examined,
  gt.THREAD_ID AS blocking_thread_id,
  gt.PROCESSLIST_ID AS blocking_pid,
  -- sys.ps_thread_account(g.OWNER_THREAD_ID) AS blocking_account,
  -- not supported here because it is unnecessary (in MySQL it must be included in the SELECT when collecting)
  g.LOCK_TYPE AS blocking_lock_type,
  g.LOCK_DURATION AS blocking_lock_duration,
  concat('KILL QUERY ', cast(gt.PROCESSLIST_ID as varchar)) AS sql_kill_blocking_query,
  concat('KILL ', cast(gt.PROCESSLIST_ID as varchar)) AS sql_kill_blocking_connection,
  g.env_name as env_name,
  g.service_name as service_name,
  g.day as day,
  g.hour as hour
from (
  (
    (
      (
        (
          metadata_locks g
          join metadata_locks p on(
            (g.OBJECT_TYPE = p.OBJECT_TYPE)
            and (g.OBJECT_SCHEMA = p.OBJECT_SCHEMA)
            and (g.OBJECT_NAME = p.OBJECT_NAME)
            and (g.LOCK_STATUS = 'GRANTED')
            and (p.LOCK_STATUS = 'PENDING')
            and (g.stats_collected_at_utc = p.stats_collected_at_utc
                 and g.day = p.day
                 and g.hour = p.hour
                 and g.env_name = p.env_name
                 and g.service_name = p.service_name)
          )
        )
        join threads gt on(
          g.OWNER_THREAD_ID = gt.THREAD_ID
          and g.stats_collected_at_utc = gt.stats_collected_at_utc
          and g.day = gt.day
          and g.hour = gt.hour
          and g.env_name = gt.env_name
          and g.service_name = gt.service_name
        )
      )
      join threads pt on(
        p.OWNER_THREAD_ID = pt.THREAD_ID
        and p.stats_collected_at_utc = pt.stats_collected_at_utc
        and p.day = pt.day
        and p.hour = pt.hour
        and p.env_name = pt.env_name
        and p.service_name = pt.service_name
      )
    )
    left join events_statements_current gs on(
      g.OWNER_THREAD_ID = gs.THREAD_ID
      and g.stats_collected_at_utc = gs.stats_collected_at_utc
      and g.day = gs.day
      and g.hour = gs.hour
      and g.env_name = gs.env_name
      and g.service_name = gs.service_name
    )
  )
  left join events_statements_current ps on(
    p.OWNER_THREAD_ID = ps.THREAD_ID
    and p.stats_collected_at_utc = ps.stats_collected_at_utc
    and p.day = ps.day
    and p.hour = ps.hour
    and p.env_name = ps.env_name
    and p.service_name = ps.service_name
  )
)
where (g.OBJECT_TYPE = 'TABLE')
```

Results

Using this mechanism, let's actually generate blocking and investigate it in Athena.

```sql
select *
from innodb_lock_waits
where stats_collected_at_jst
        between timestamp '2024-03-01 15:00:00' and timestamp '2024-03-01 16:00:00'
  and env_name = 'dev'
  and service_name = 'some-service'
  and hour between cast(date_format(DATE_ADD('hour', -9, timestamp '2024-03-01 15:00:00'), '%H') as integer)
             and cast(date_format(DATE_ADD('hour', -9, timestamp '2024-03-01 16:00:00'), '%H') as integer)
  and day = 1
order by stats_collected_at_jst asc
limit 100
```

If you execute this query in Athena, specifying the period in which the blocking occurred, results like the following are returned. Since the blocker's own SQL is not known from these results, we use CloudWatch Logs Insights with the process ID (the blocking_pid column) to check the history of SQL executed by the blocker:

```
fields @timestamp, @message
| parse @message /(?<timestamp>[^\s]+)\s+(?<process_id>\d+)\s+(?<type>[^\s]+)\s+(?<query>.+)/
| filter process_id = 215734
| sort @timestamp desc
```

The following results show that the blocker's SQL is update d1.t1 set c1 = 12345. The same procedure can now be used to check metadata-related blocking in schema_table_lock_waits, as sketched below.
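For reference, a metadata-lock investigation can be written the same way; the following is a sketch of such a query, assuming the same partition columns and the same hypothetical env/service values as above:

```sql
select *
from schema_table_lock_waits
where stats_collected_at_jst
        between timestamp '2024-03-01 15:00:00' and timestamp '2024-03-01 16:00:00'
  and env_name = 'dev'
  and service_name = 'some-service'
  and day = 1
order by stats_collected_at_jst asc
limit 100
```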
Future Outlook

Looking ahead, we are considering the following:

- Deployment to production is scheduled, so accumulate knowledge about blocking-caused incidents through actual operation.
- Investigate and tune bottlenecks to minimize the Lambda billed duration.
- Expand the collection targets in performance_schema and information_schema.
  - Expand the scope of investigations, including analysis of index usage.
  - Improve the DB layer's problem-solving capability through a cycle of expanding the collected information based on feedback from incident response.
- Visualization with BI services such as Amazon QuickSight.
  - Create a world where members unfamiliar with performance_schema can still investigate root causes.

Summary

This article described a case where investigating timeout errors caused by lock contention in Aurora MySQL led to a mechanism that periodically collects the information needed to follow up on the causes of blocking. To follow up on blocking in MySQL, wait data for two types of locks, metadata locks and InnoDB locks, must be collected periodically from the following six tables.

Metadata locks

- performance_schema.metadata_locks
- performance_schema.threads
- performance_schema.events_statements_current

InnoDB locks

- performance_schema.data_lock_waits
- information_schema.INNODB_TRX
- performance_schema.data_locks

Based on our environment, we designed and implemented a multi-region architecture that collects information from multiple DB clusters. As a result, blocking can now be investigated after the fact with SQL, with a time lag of at most five minutes after the blocking occurs and query results returned in a few seconds. Although such features may eventually be incorporated into SaaS and AWS services, our DBRE team values proactively building what we need ourselves.

KINTO Technologies' DBRE team is looking for people to join us! Casual chats are also welcome, so if you are interested, feel free to DM me on X. Don't forget to follow our recruitment X account too!

Appendix: References

The following articles clearly summarize how to investigate blocking in the MySQL 8.0 series and were used as references:

- Checking the row lock status of InnoDB [Part 1]
- Checking the row lock status of InnoDB [Part 2]
- Isono, Let's Show MySQL Lock Contention

Once the blocker is identified, further investigation is needed to determine the specific locks causing the blocking. The following article summarizes MySQL locks clearly:

- About MySQL Locks

I also referred to the MySQL Reference Manual:

- The metadata_locks Table
- The data_locks Table
- InnoDB INFORMATION_SCHEMA Transaction and Locking Information
Introduction

Nice to meet you, I am Somi, and I develop the my route app for Android at KINTO Technologies Corporation. my route is an app that enriches travel experiences by providing functions such as "Odekake Information" (information on transportation and outings), "Search by Map," and "Odekake Memo" (a notepad function). The my route Android team is currently making heavy use of Jetpack Compose to improve the UI/UX. This UI toolkit improves code readability and lets us develop UI quickly and flexibly, and its declarative approach simplifies development and makes UI components more reusable. With that in mind, I will walk through four examples of Jetpack Compose features used in the my route Android app.

Functionalities

1. drawRect and drawRoundRect

Jetpack Compose uses Canvas to make it possible to draw within a specific area. drawRect and drawRoundRect are shape-drawing functions that can be called inside a Canvas. drawRect draws a rectangle with a specified offset and size, while drawRoundRect does everything drawRect does and additionally takes a cornerRadius parameter that adjusts the roundness of the corners. my route has a function that reads coupon codes in text form with the device's camera. To recognize codes accurately, the area used for text recognition had to be transparent and the rest darkened, so we implemented the UI with drawRect and drawRoundRect.

```kotlin
@Composable
fun TextScanCameraOverlayCanvas() {
    val overlayColor = MaterialTheme.colors.onSurfaceHighEmphasis.copy(alpha = 0.7f)
    ...
    Canvas(
        modifier = Modifier.fillMaxSize()
    ) {
        with(drawContext.canvas.nativeCanvas) {
            val checkPoint = saveLayer(null, null)
            drawRect(color = overlayColor)
            drawRoundRect(
                color = Color.Transparent,
                size = Size(width = layoutWidth.toPx(), height = 79.dp.toPx()),
                blendMode = BlendMode.Clear,
                cornerRadius = CornerRadius(7.dp.toPx()),
                topLeft = Offset(x = screenWidth.toPx(), y = rectHeight.toPx())
            )
            restoreToCount(checkPoint)
        }
    }
}
```

The above code produces the following UI. To explain: drawRect darkens the whole screen with the color specified by overlayColor, and drawRoundRect cuts out a transparent rectangle with rounded corners to make it clear that text inside that area will be recognized.

2. KeyboardActions and KeyboardOptions

KeyboardActions and KeyboardOptions are classes used with the TextField component. TextField is a UI element that handles input; KeyboardOptions lets you choose the type of keyboard shown for the input field, and KeyboardActions defines what happens when the IME action key (such as Enter) is pressed. The account screen in my route has a place to store your credit card information for payments. Since the field where the user enters the card number interacts with the device's keyboard, we implemented it with KeyboardActions and KeyboardOptions.

```kotlin
@Composable
fun CreditCardNumberInputField(
    value: String,
    onValueChange: (String) -> Unit,
    placeholderText: String,
    onNextClick: () -> Unit = {}
) {
    ThinOutlinedTextField(
        ...
        singleLine = true,
        keyboardOptions = KeyboardOptions(
            keyboardType = KeyboardType.Number,
            imeAction = ImeAction.Next
        ),
        keyboardActions = KeyboardActions(
            onNext = { onNextClick() }
        )
    )
}
```

The above code produces the following UI. So that only credit card numbers can be entered, KeyboardOptions sets the keyboardType to Number and the imeAction to ImeAction.Next, so that focus moves on as you finish typing.
KeyboardActions then makes the onNextClick() method run when the "Next" button on the keyboard is pressed. By the way, onNextClick() is set up in a Fragment as follows:

```kotlin
CreditCardNumberInputField(
    ...
    onNextClick = { binding.creditCardHolderName.requestFocus() }
)
```

With these settings, pressing the "Next" button moves you from entering the credit card number to the next step, entering your name.

3. LazyVerticalGrid

LazyVerticalGrid displays items in a grid. The grid scrolls vertically and can display many items (or lists of unknown length), and the number of columns can be adjusted to the screen size, so items display well on a variety of screens. The "This month's events" section in my route provides information on the many events happening in the area where you are currently located. There was too much event information (title, image, event period) to fit in a single column, so we used LazyVerticalGrid to display the event items over several rows in a vertically scrollable container.

```kotlin
private const val COLUMNS = 2

LazyVerticalGrid(
    columns = GridCells.Fixed(COLUMNS),
    modifier = Modifier
        .padding(start = 16.dp, end = 16.dp),
    horizontalArrangement = Arrangement.spacedBy(16.dp),
    verticalArrangement = Arrangement.spacedBy(20.dp)
) {
    items(eventList.size) { index ->
        val item = eventList[index]
        EventItem(
            event = item,
            modifier = Modifier.singleClickable { onItemClicked(item) }
        )
    }
}
```

The above code produces the following UI (the images and titles have been removed for copyright reasons). Items are now displayed in a grid at regular intervals according to the size of eventList, and the event information is always easy to scan.

4. Drag and Drop

The draggable modifier lets the user drag a screen component. If you need to control the entire drag flow, you use pointerInput. my route has a function called "my station" that lets you register up to 12 stations or bus stops, displayed as a card list so you can see them at a glance. This card list can be reordered freely, which requires a drag-and-drop implementation.

```kotlin
itemsIndexed(stationList) { index, detail ->
    val isDragged = index == lazyColumnDragDropState.draggedItemIndex
    MyStationDraggableItem(
        detail = detail,
        draggableModifier = Modifier.pointerInput(Unit) {
            detectDragGestures(
                onDrag = { change, offset ->
                    lazyColumnDragDropState.onDrag(scrollAmount = offset.y)
                    lazyColumnDragDropState.scrollIfNeed()
                },
                onDragStart = { lazyColumnDragDropState.onDragStart(index) },
                onDragEnd = { lazyColumnDragDropState.onDragInterrupted() },
                onDragCancel = { lazyColumnDragDropState.onDragInterrupted() }
            )
        },
        modifier = Modifier.graphicsLayer {
            val offsetOrNull = lazyColumnDragDropState.draggedItemY.takeIf { isDragged }
            translationY = offsetOrNull ?: 0f
        }
        .zIndex(if (isDragged) 1f else 0f)
    )
    val isPinned = lazyColumnDragDropState.initialDraggedItem?.index == index
    if (isPinned) {
        val pinContainer = LocalPinnableContainer.current
        DisposableEffect(pinContainer) {
            val pinnedHandle = pinContainer?.pin()
            onDispose { pinnedHandle?.release() }
        }
    }
}
```

The above code produces the following UI. Drag operations are detected with pointerInput, and the detectDragGestures function processes the drag events.
When an item is dragged, the onDrag, onDragStart, onDragEnd, and onDragCancel callbacks of the lazyColumnDragDropState object are called to manage the drag state, and the graphicsLayer modifier updates the item's Y-axis position so it visibly follows the drag. The code also uses the isPinned variable and LocalPinnableContainer to pin the dragged item so that it is not disposed of and does not vanish when the list scrolls.

Summary

This was a brief explanation, and some parts may not be clear right away, but this is how we use Jetpack Compose in my route. At first, rewriting the my route UI from XML layouts felt a little complicated because I was not used to Jetpack Compose, but I quickly came to understand code written in it, and I find it very efficient in terms of readability and maintainability. We will continue to improve the UX of my route by using Jetpack Compose in various ways. Thank you for reading to the end.
Introduction (an overview of the activity)

An initiative called the "Manabi no Michi-no-Eki" (Learning Roadside Station) has started at KINTO Technologies! What does "learning" plus "roadside station" mean? Our company promotes an output culture, with activities such as the tech blog and speaking at events. So what powers that output? We believe the prerequisite for output, namely "input," what you have learned, is crucial, and a team has been launched to strengthen the company's capacity to learn. This project, too, began with volunteers.

The words "roadside station" (michi-no-eki) carry several meanings for us. Have you ever stopped at one? A roadside station is a community where regional specialties gather, a place where travelers rest, and a wonderful "place to belong" where you encounter worlds you would never meet elsewhere. We wanted to create such a place for everyone on the journey of "learning": somewhere people can casually drop in, get excited by new encounters, gather, and come away energized. That is how the "Learning Roadside Station" was born.

What does the "Learning Roadside Station" do?

As the "roadside station" where the company's study groups intersect, it supports internal revitalization centered on study groups.

Internal publicity

- "We're holding a study group on this theme soon!"
- "What is that study group I've been curious about actually like?"

Study group support

- "I'd like to start a study group; how should I begin?"
- "I run a study group, but it isn't lively..." and other consultations

[We asked the organizing members] What led you to the "Learning Roadside Station"?

Nakanishi: I believe that "life = learning." Learning something new gives people purpose, an anchor, and vitality. Everyone I have met and found truly inspiring has been someone who keeps learning new things and shines because of it. I had been thinking that if we could create a place where such shining people gather across the company, our everyday work and craftsmanship would improve. Around that time, voices started reaching us saying "information about internal study groups is scattered" and "I want to know what learning opportunities exist," which led to launching this project.

HOKA: I normally work in HR, and in interviews with employees I kept hearing "I want more communication across groups," which left me vaguely wanting to do something about it. At the same time, through my work I noticed that the people thriving at KINTO Technologies tend to participate in study groups. Where those two observations crossed, the idea "maybe we need a scheme for learning and mingling at the same time?!" was born. When I consulted my boss, I was introduced to Kin-chan and Nakanishi-san, and the "Learning Roadside Station" was, quite literally, born with a bang.

Kin-chan: I have been in touch with "study group" culture in many settings for more than 15 years. From the day I joined, KINTO Technologies already had a wonderful culture in which learning is woven into the work. Wanting to spread that culture even further and contribute to the growth of people, the organization, and the business, I got involved in gathering information about study groups.

How it came about

[Origin 1] We wanted to consolidate information about internal study groups!

KINTO Technologies is an organization where voluntary learning activities such as study groups and book-reading circles are thriving. We kept hearing "all kinds of study groups are happening, but I don't know where or when! I want to know! I want to learn more!" The desire to make all of this visible is the origin of our activity. When we started gathering information, we found about 40 study groups in a short period. We also know of hidden ones, so including small gatherings, we estimate that more than 60 study groups are probably running inside the company. Struck by "isn't it amazing that study groups are this active?", the three of us got together and started talking at the end of November 2023.

[Origin 2] What should we do!?

In our first meeting, we listed everything we wanted to do. Should we drop in on study groups one after another? Publish continuously on the tech blog? Out of these ideas came the hypothesis that the first priority was for people in the company to know who we are. So we decided to take part in the internal LT (lightning talk) event held three weeks later, on December 21. Without yet mentioning the "Learning Roadside Station," each of the three of us gave an LT, and Kin-chan took first place (applause). Our first move was simply making ourselves known. For details, see the tech blog post about the LT event: 社内限定のLT大会を開催しました!

[Origin 3] Let's make an inception deck!

At our meeting on December 27, 2023, we realized that a team with this many ambitions needs a compass, so from the new year we created an "inception deck." An inception deck is made in software development projects so that all members share a common understanding of, and goals for, the project. We put the following four items into words:

- Why are we here?
- Elevator pitch
- The not-to-do list
- Our A-Team

Once these were written down, the project name "Learning Roadside Station" came to us naturally and we settled on it without hesitation. While making the inception deck, we talked about collaborative learning and the Source Principle, and each of us shared our feelings about "learning." The process of making the inception deck was itself a learning experience for us.

The engine finally starts!!

The inception deck was finished in late January 2024. When it was done, we felt a little impatient: the deck had clarified what we wanted and needed to do, and we wanted to get moving as soon as possible. (Kin-chan, who proposed the inception deck, may or may not have been quietly delighting in "heh heh, just as predicted.") As our first step, we announced the birth of the "Learning Roadside Station" at the monthly all-hands "division meeting" attended by every member of KINTO Technologies! At the same time, we had already started "Totsugeki! Tonari no Benkyokai" (Raid! The Study Group Next Door). On February 22 we gathered the organizers of a joint study group in a meeting room and interviewed them. With no plan document or interview script prepared in advance, we pulled out a smartphone and recorded on the spot. Both interviewers and interviewees were slightly bewildered ("What, right here?!") but cooperated. (Thank you, everyone!) We later cut the unneeded comments so it could air as a podcast, and on March 13 we unveiled it to all employees on the company-wide Slack.

What's next

Since then, we have visited three study groups, published two podcast episodes, and started on two blog posts, and we took stock and talked about our future:

- What does everyone want to know?
- Are people interested in the study groups themselves?
- What do the organizers want people to know?

Our conclusion: each study group has different goals and needs, so it is better to build an individual story suited to each group's character. We also examined what role the podcast plays: is it advertising for study groups, or an internal newsletter? We concluded that "having this many study groups is simply everyday life at KTC"; in other words, if we can make visible that the study-group culture has taken root, our goal is achieved. Going forward, our policy is to keep visiting study groups and making podcasts, learning from failures and building on what works!
Actually, one member, HOKA, was quietly breaking into a cold sweat over this agile way of proceeding, "move, look back, correct course, and head somewhere better," because before joining KTC she had worked at companies where the handling of information followed fixed rules and flows. Working in the secretariat of the "Learning Roadside Station" has become a chance to learn KTC's development policy of "build small, grow big," even from within HR. The "Learning Roadside Station" has only just begun. We plan to appear on the KINTO Tech Blog from time to time, so stay tuned.
I am Ryomm, developing my route (iOS) at KINTO Technologies. Together with fellow developers Hosaka-san and Cho-san, plus one partner-company member, the four of us introduced and implemented snapshot testing.

Introduction

The my route app is currently moving toward SwiftUI, and as a stepping stone we decided to introduce snapshot testing. Our SwiftUI migration keeps UIViewController as the foundation and first replaces only the content inside with SwiftUI, so the snapshot tests implemented here should remain usable as-is. In this post I will share the techniques we worked out through trial and error while applying snapshot testing to an app built with UIKit.

What is a snapshot test?

It is a test that checks whether there is any difference between screenshots taken before and after a code change. We use Point-Free's library, https://github.com/pointfreeco/swift-snapshot-testing. In my route, we extend XCTestCase with a method that wraps assertSnapshots, as follows. The threshold is 0.985 because, after much experimentation, that value lets extremely small acceptable differences pass.

```swift
extension XCTestCase {
    var precision: Float { 0.985 }

    func testSnapshot(vc: UIViewController, record: Bool = false, file: StaticString, function: String, line: UInt) {
        assert(UIDevice.current.name == "iPhone 15", "Please run the test by iPhone 15")
        // SnapshotConfig is an enum listing the devices to test
        SnapshotConfig.allCases.forEach {
            assertSnapshots(matching: vc,
                            as: [.image(on: $0.viewImageConfig, precision: precision)],
                            record: record,
                            file: file,
                            testName: function + $0.rawValue,
                            line: line)
        }
    }
}
```

A snapshot test for each screen is written like this:

```swift
final class SampleVCTests: XCTestCase {
    // whether the snapshot test runs in record mode
    var record = false

    func testViewController() throws {
        let SampleVC = SampleVC(coder: coder)
        let navi = UINavigationController(rootViewController: SampleVC)
        navi.modalPresentationStyle = .fullScreen
        // the lifecycle methods are invoked here
        UIApplication.shared.rootViewController = navi
        // the lifecycle methods from viewDidLoad onward run once per test device
        testSnapshot(vc: navi, record: record, file: #file, function: #function, line: #line)
    }
}
```

Tips

Waiting for API data fetched after viewWillAppear to be reflected in the view

We want the snapshot test to run only after data fetched from the API has been reflected in the view, but the test would run before the screen updated, leaving the loading indicator visible, among other problems. As-is, it is hard to tell whether post-API data has reached the view, so we prepare a delegate for that purpose:

```swift
protocol BaseViewControllerDelegate: AnyObject {
    func viewDidDraw()
}
```

In the view controller, create a delegate property conforming to this protocol, defaulting to nil when not specified at initialization:

```swift
class SampleVC: BaseViewController {
    // ...
    weak var baseDelegate: BaseViewControllerDelegate?
    // ...
    init(baseDelegate: BaseViewControllerDelegate? = nil) {
        self.baseDelegate = baseDelegate
        super.init(nibName: nil, bundle: nil)
    }
    // ...
}
```

Where the API result is applied to the screen, for example after receiving a result via Combine, call baseDelegate.viewDidDraw() to tell the snapshot test that the view has been updated:

```swift
someAPIResult.receive(on: DispatchQueue.main)
    .sink(receiveValue: { [weak self] result in
        guard let self else { return }
        switch result {
        case .success(let item):
            self.hideIndicator()
            self.updateView(with: item)
            // the moment the data has been applied
            self.baseDelegate?.viewDidDraw()
        case .failure(let error):
            self.hideIndicator()
            self.showError(error: error)
        }
    })
    .store(in: &cancellables)
```

Because we want to wait until baseDelegate.viewDidDraw() runs, we add an XCTestExpectation to the snapshot test:

```swift
final class SampleVCTests: XCTestCase {
    var record = false
    var expectation: XCTestExpectation!
    func testViewController() throws {
        let SampleVC = SampleVC(coder: coder, baseDelegate: self)
        let navi = UINavigationController(rootViewController: SampleVC)
        navi.modalPresentationStyle = .fullScreen
        UIApplication.shared.rootViewController = navi

        expectation = expectation(description: "callSomeAPI finished")
        wait(for: [expectation], timeout: 5.0)
        viewController.baseViewControllerDelegate = nil
        testSnapshot(vc: navi, record: record, file: #file, function: #function, line: #line)
    }

    func viewDidDraw() {
        expectation.fulfill()
    }
}
```

When the screen reflects data from more than one API (i.e., baseDelegate.viewDidDraw() is called in several places), specify expectedFulfillmentCount and assertForOverFulfill:

```swift
final class SampleVCTests: XCTestCase {
    var record = false
    var expectation: XCTestExpectation!

    func testViewController() throws {
        let SampleVC = SampleVC(coder: coder, baseDelegate: self)
        let navi = UINavigationController(rootViewController: SampleVC)
        navi.modalPresentationStyle = .fullScreen
        UIApplication.shared.rootViewController = navi

        expectation = expectation(description: "callSomeAPI finished")
        // when viewDidDraw() is called twice
        expectation.expectedFulfillmentCount = 2
        // when viewDidDraw() may be called more than the specified count, ignore the excess
        expectation.assertForOverFulfill = false
        wait(for: [expectation], timeout: 5.0)
        viewController.baseViewControllerDelegate = nil
        testSnapshot(vc: navi, record: record, file: #file, function: #function, line: #line)
    }

    func viewDidDraw() {
        expectation.fulfill()
    }
}
```

If the previous screen's baseViewControllerDelegate is left set, then when the snapshot tests run across all screens, the lifecycle methods from viewDidLoad onward run once per test device at the moment testSnapshot() is called, so the API runs again, viewDidDraw() fires again, and you get a "multiple calls" error. That is why we clear baseViewControllerDelegate after wait().

Frames shift on some devices

Snapshot testing can generate snapshots for multiple devices, but on some devices the placement and size of components shifted.

It's misaligned...

This stems from the snapshot test's execution lifecycle: the app launches on one device, and the other devices are then re-rendered at their sizes without reloading. In other words, viewDidLoad() runs only once, and for the remaining devices execution starts from viewWillAppear(). The workaround is to create a MockViewController wrapping the view controller under test and override it so that the methods called in viewDidLoad() are called in viewWillAppear():

```swift
import XCTest
@testable import App

final class SampleVCTests: XCTestCase {
    // whether the snapshot test runs in record mode
    var record = false

    func testViewController() throws {
        // written the same way the screen is normally instantiated
        let storyboard = UIStoryboard(name: "Sample", bundle: nil)
        let SampleVC = storyboard.instantiateViewController(identifier: "Sample") { coder in
            // a VC wrapped for snapshot testing
            MockSampleVC(coder: coder, completeHander: nil)
        }
        let navi = UINavigationController(rootViewController: SampleVC)
        navi.modalPresentationStyle = .fullScreen
        UIApplication.shared.rootViewController = navi
        testSnapshot(vc: navi, record: record, file: #file, function: #function, line: #line)
    }
}

class MockSampleVC: SampleVC {
    required init?(coder: NSCoder) {
        fatalError("init(coder: \\(coder) has not been implemented")
    }

    override init?(coder: NSCoder, completeHander: ((_ readString: String?) -> Void)? = nil) {
        super.init(coder: coder, completeHander: completeHander)
    }
    override func viewWillAppear(_ animated: Bool) {
        super.viewWillAppear(animated)
        // the methods below are normally called in viewDidLoad()
        super.setNavigationBar()
        super.setCameraPreviewMask()
        super.cameraPreview()
        super.stopCamera()
    }
}
```

Still not fixed...

If the rendering is still off, calling layoutIfNeeded() to refresh the frames fixed it in most cases:

```swift
import XCTest
@testable import App

final class SampleVCTests: XCTestCase {
    var record = false

    func testViewController() throws {
        let storyboard = UIStoryboard(name: "Sample", bundle: nil)
        let SampleVC = storyboard.instantiateViewController(identifier: "Sample") { coder in
            MockSampleVC(coder: coder, completeHander: nil)
        }
        let navi = UINavigationController(rootViewController: SampleVC)
        navi.modalPresentationStyle = .fullScreen
        UIApplication.shared.rootViewController = navi
        testSnapshot(vc: navi, record: record, file: #file, function: #function, line: #line)
    }
}

fileprivate class MockSampleVC: SampleVC {
    required init?(coder: NSCoder) {
        fatalError("init(coder: \\(coder) has not been implemented")
    }

    override init?(coder: NSCoder, completeHander: ((_ readString: String?) -> Void)? = nil) {
        super.init(coder: coder, completeHander: completeHander)
    }

    override func viewWillAppear(_ animated: Bool) {
        super.viewWillAppear(animated)
        // refresh the frames before calling the drawing methods
        self.videoView.layoutIfNeeded()
        self.targetView.layoutIfNeeded()
        super.setNavigationBar()
        super.setCameraPreviewMask()
        super.cameraPreview()
        super.stopCamera()
    }
}
```

Looking good

Snapshots of WebView screens

There are cases where you do not care about the content shown in a WebView but do want snapshot tests for the toolbar and other chrome around it. In such cases, separate the part that loads the WebView from the WebView's own setup, and mock it so the load is not called in tests. On the implementation side, the method that displays content by calling self.webview.load(urlRequest) is separated from the method that configures the WebView itself:

```swift
// VC implementation
class SampleWebviewVC: BaseViewController {
    // ...
    override func viewDidLoad() {
        super.viewDidLoad()
        self.setNavigationBar()
        self.setWebView()
        self.setToolBar()
    }

    override func viewDidAppear(_ animated: Bool) {
        super.viewDidAppear(animated)
        self.setWebViewContent()
    }
    // ...

    /**
     * Separate the WebView setup from the content setup
     */

    /// Configure the WebView itself
    func setWebView() {
        self.webView.uiDelegate = self
        self.webView.navigationDelegate = self
        // observe the page loading progress
        webViewObservers.append(self.webView.observe(\\.estimatedProgress, options: .new) { [weak self] _, change in
            guard let self = self else { return }
            if let newValue = change.newValue {
                self.loadingProgress.setProgress(Float(newValue), animated: true)
            }
        })
    }

    /// Set the WebView content
    private func setWebViewContent() {
        let request = URLRequest(url: self.url, cachePolicy: .reloadIgnoringLocalCacheData, timeoutInterval: 60)
        self.webView.load(request)
    }
    // ...
}
```

In the mock wrapping the VC under test, the method that loads the WebView content is simply not called:

```swift
import XCTest
@testable import App

final class SampleWebviewVCTests: XCTestCase {
    private let record = false

    func testViewController() throws {
        let storyboard = UIStoryboard(name: "SampleWebview", bundle: .main)
        let SampleWebviewVC = storyboard.instantiateViewController(identifier: "SampleWebview") { coder in
            MockSampleWebviewVC(coder: coder, url: URL(string: "https://top.myroute.fun/")!, linkType: .hoge)
        }
        let navi = UINavigationController(rootViewController: SampleWebviewVC)
        navi.modalPresentationStyle = .fullScreen
        UIApplication.shared.rootViewController = navi
        testSnapshot(vc: navi, record: record, file: #file, function: #function, line: #line)
    }
}

fileprivate class MockSampleWebviewVC: SampleWebviewVC {
    override init?(coder: NSCoder, url: URL, linkType: LinkNamesItem?) {
        super.init(coder: coder, url: url, linkType: linkType)
    }

    required init?(coder: NSCoder) {
        fatalError("init(coder:) has not been implemented")
    }

    override func viewWillAppear(_ animated: Bool) {
        // call the methods that used to run in viewDidLoad here instead
        self.setNavigationBar()
        self.setWebView()
        self.setToolBar()
        super.viewWillAppear(animated)
    }

    override func viewDidAppear(_ animated: Bool) {
        // Do nothing
        // overridden so the WebView content-loading method is not called
    }
}
```

Snapshots of screens that use the camera

We also want to snapshot screens that invoke the camera and draw a customized view on top of it. However, the camera does not work on the simulator, so we need to somehow disable the camera part while still testing the overlay. One idea was to inject dummy footage so the camera image works on the simulator, but introducing that just for snapshot tests of non-core screens was not worth the cost. In my route's snapshot tests, we override in a mock the entire section that captures camera input and configures the capture shown by AVCaptureVideoPreviewLayer. As a result, an AVCaptureVideoPreviewLayer with no input renders as a plain white screen, and the customized view can be displayed on top of it. The actual implementation looks like this...

```swift
class UseCameraVC: BaseViewController {
    // ...
    override func viewDidLoad() {
        super.viewDidLoad()
        self.videoView.layoutIfNeeded()
        setNavigationBar()
        setCameraPreviewMask()
        do {
            guard let videoDevice = AVCaptureDevice.default(for: AVMediaType.video) else { return }
            let videoInput = try AVCaptureDeviceInput(device: videoDevice) as AVCaptureDeviceInput
            if captureSession.canAddInput(videoInput) {
                captureSession.addInput(videoInput)
                let videoOutput = AVCaptureVideoDataOutput()
                if captureSession.canAddOutput(videoOutput) {
                    captureSession.addOutput(videoOutput)
                    videoOutput.setSampleBufferDelegate(self, queue: DispatchQueue.main)
                }
            }
        } catch {
            return
        }
        cameraPreview()
    }

    override func viewWillAppear(_ animated: Bool) {
        super.viewWillAppear(animated)
        // the camera is unavailable on the simulator, so close the screen
        #if targetEnvironment(simulator)
        stopCamera()
        dismiss(animated: true)
        #else
        captureSession.startRunning()
        #endif
    }
}
```

...and is overridden in a mock as follows. For the reasons explained in the frame-shift section above, the methods that used to be called in viewDidLoad() are also called in viewWillAppear():

```swift
class MockUseCameraVC: UseCameraVC {
    // ...
    override func viewWillAppear(_ animated: Bool) {
        super.viewWillAppear(animated)
        self.videoView.layoutIfNeeded()
        super.setNavigationBar()
        super.setCameraPreviewMask()
        super.cameraPreview()
        super.stopCamera()
    }
}
```

The cameraPreview() method shows the camera feed from captureSession in an AVCaptureVideoPreviewLayer, but since we have overridden things so that there is no input, it renders as a white view.

CI strategy

When we first introduced snapshot testing, reference images were uploaded to a single S3 bucket, and reviewers downloaded them and ran the tests each time. However, when a view was modified and its reference images updated at the same time, the tests of other PRs would fail until that PR was merged. So we created two directories in the bucket hosting the reference images: one hosts the images under PR review, and when the PR is merged they are copied to the other directory. This way, reference-image updates no longer interfere with other PRs' tests.

Handy shell scripts

my route provides four shell scripts for snapshots. The first downloads the full set of current reference images, so that the tests pass locally:

```sh
# Use when switching from the develop branch
# e.g.: sh setup_snapshot.sh

# clean old files out of the reference image directory
rm -r AppTests/Snapshot/__Snapshots__/
# download the reference images from S3
aws s3 cp "$awspath/AppTests/Snapshot/__Snapshots__" AppTests/Snapshot/__Snapshots__ --recursive --profile user
```

The second uploads the changed reference images to the PR-review S3 bucket when creating a pull request:

```sh
# When creating a PR, pass the modified tests as arguments to upload
# e.g.: sh upload_snapshot.sh ×××Tests
path="./SpotTests/Snapshot/__Snapshots__"
awspath="s3://strl-mrt-web-s3b-mat-001-jjkn32-e/mobile-app-test/ios/feature/__Snapshots__"
if [ $# = 0 ]; then
    echo "no arguments given"
else
    for testName in "${@}"; do
        if [[ $testName == *"Tests"* ]]; then
            echo "$path/$testName"
            aws s3 cp "$path/$testName" "$awspath/$testName" --exclude ".DS_Store" --recursive --profile user
        else
            echo "($testName) test does not exist"
        fi
    done
fi
```

The third downloads the reference images of modified screens individually; it is used when reviewing a pull request that changes screens:

```sh
# When reviewing, download the reference images for the target tests
# e.g.: sh download_snapshot.sh ×××Tests
if [ $# = 0 ]; then
    echo "no arguments given"
else
    rm -r AppTests/Snapshot/__Snapshots__/
    for testName in "${@}"; do
        if [[ $testName == *"Tests"* ]]; then
            echo "$localpath/$testName"
            aws s3 cp "$awspath/$testName" "$localpath/$testName" --recursive --profile user
        else
            echo "($testName) test does not exist"
        fi
    done
fi
```

The fourth forcibly updates reference images. It is normally unnecessary, because reference images for screens whose test files changed are copied automatically, but it is useful when reference images change without the test files changing, such as when a shared component is modified:

```sh
# When reference images other than those of the modified test files are affected
# (e.g., a shared component was modified), upload manually. Use after merging.
# e.g.: sh force_upload_snapshot.sh ×××Tests
if [ $# = 0 ]; then
    echo "no arguments given"
else
    echo "Force-upload to the develop folder on AWS S3? [yes/no]"
    read question
    if [ $question = "yes" ]; then
        for testName in "${@}"; do
            if [[ $testName == *"Tests"* ]]; then
                echo "$localpath/$testName"
                aws s3 cp "$localpath/$testName" "$awsFeaturePath/$testName" --exclude ".DS_Store" --recursive --profile user
                aws s3 cp "$localpath/$testName" "$awsDevelopPath/$testName" --exclude ".DS_Store" --recursive --profile user
            else
                echo "($testName) test does not exist"
            fi
        done
    else
        echo "aborted"
    fi
fi
```

With four scripts it is easy to forget which one is used when and by whom, so we also define them in a Taskfile to keep the descriptions at hand. When actually running them we usually call the shell scripts directly, since task arguments need a -- and get slightly long, but having the descriptions alone makes the setup worthwhile:

```
% task
task: [default] task -l --sort none
task: Available tasks for this project:
* default:                 show commands
* setup_snapshot:          [For Assignee] [after switching branches] Use when switching from the develop branch, e.g. to fix snapshot tests. (e.g.) task setup_snapshot or sh setup_snapshot.sh
* upload_snapshot:         [For Assignee] [when creating a PR] Upload the snapshot images of the modified tests to the PR-review S3. (e.g.) task upload_snapshot -- ×××Tests or sh upload_snapshot.sh ×××Tests
* download_snapshot:       [For Reviewer] [when reviewing] Download the reference images of the target tests. (e.g.) task download_snapshot -- ×××Tests or sh download_snapshot.sh ×××Tests
* force_upload_snapshot:   [For Assignee] [after merging]
Upload manually, passing the changed tests as arguments, when reference images other than those of the modified test files are affected (e.g., a shared component was modified). (e.g.) task force_upload_snapshot -- ×××Tests or sh force_upload_snapshot.sh ×××Tests
```

Also, this is something Ryomm set up personally, but it is handy to keep an alias that rewrites the hard-coded profile name in the scripts to the profile configured in your own environment (for those particular about profile names). Here, the hard-coded profile user is rewritten to myroute-user:

```sh
alias sett="gsed -i 's/user/myroute-user/' setup_snapshot.sh && gsed -i 's/user/myroute-user/' upload_snapshot.sh && gsed -i 's/user/myroute-user/' download_snapshot.sh && gsed -i 's/user/myroute-user/' force_upload_snapshot.sh"
```

Bitrise

my route uses Bitrise for CI. When a PR containing snapshot-test changes is merged, Bitrise automatically determines whether snapshot tests were modified and copies the reference images from the feature folder to the develop folder. This keeps the snapshot tests working correctly in every situation.

Flushing out reference-image differences invisible to the naked eye

Sometimes the snapshot test fails even though no difference is visible to the eye.

(3_3)?

In such cases, overlaying the images with ImageMagick makes the difference easier to find. Run a command like the following...

```sh
convert Snapshot/refarence.png -color-matrix "6x3: 1 0 0 0 0 0.4 0 1 0 0 0 0 0 0 1 0 0 0" ~/changeColor.png \
  && magick Snapshot/failure.png ~/changeColor.png -compose dissolve -define compose:args='60,100' -composite ~/Desktop/blend.png \
  && rm ~/changeColor.png
```

...and you can view the images overlaid. Shifting the reference image's hue toward red before overlaying makes things slightly easier to see. For convenience, registering it in .bashrc is recommended:

```sh
compare() {
  convert $1 -color-matrix "6x3: 1 0 0 0 0 0.4 0 1 0 0 0 0 0 0 1 0 0 0" ~/Desktop/changeColor.png;
  magick $1 ~/Desktop/changeColor.png -compose dissolve -define compose:args='60,100' -composite ~/Desktop/blend.png;
  rm ~/Desktop/changeColor.png
}
```

If the files mostly live in the same place, taking only the test name as the argument instead of the whole path might be fine. It also works on images hosted online, so it can be used during review too.

A final round of quick interviews!

To wrap up, I interviewed the team about introducing snapshot tests!

Cho-san: "Thanks to Hosaka-san's initial research, we can now handle snapshots in this convenient way. Afterwards, with Ryomm-san's help, the various implementation techniques were organized into documentation so they wouldn't be forgotten. I'm really glad, and grateful. 🙇‍♂️"

Hosaka-san: "Running the full test suite takes a very long time, which is the pain point, so I'd like to work on shortening it."

As for me (Ryomm), I've recently started feeling the pain of logic changes that don't affect the screen but do affect the snapshot tests, which then need fixing. On the other hand, when migrating to SwiftUI it is easy to confirm there are no visual differences, and that part has been great!
Our reading circle for "GitLabに学ぶ 世界最先端のリモート組織のつくりかた" was so good I have to share it

Hello, I'm Awacchi (@_awache). This time I'd like to share how we were captivated by the book "GitLabに学ぶ 世界最先端のリモート組織のつくりかた ドキュメントの活用でオフィスなしでも最大の成果を出すグローバル企業のしくみ" (Learning from GitLab: How to Build the World's Most Advanced Remote Organization) and turned its reading circle into an event involving people both inside and outside the company.

Announcement: we are holding the "Sodai-naru Reading Circle Finale" for the book

An announcement right out of the gate, but it matters, so it goes first.

- Connpass: 「GitLabに学ぶ 世界最先端のリモート組織のつくりかた」そーだいなる輪読会 フィナーレ
- Date and time: Thursday, 2024-04-25, 18:00-21:00 (doors open 17:40)
- Format: offline
- Venue: KINTO Technologies (KTC) Muromachi office

This event is for people who have already held a reading circle for the book, are holding one now, or want to hold one. We want it to be a place where all participants can discuss how they ran their circles, what the response was, and what insights they took from the book. There are still slots available, so please join if you are interested! We are planning it so that you can enjoy it even if you come on a whim. (Please don't point out the contradiction of discussing remote organizations offline. 😂)

Common problems with reading circles

- Securing participants continuously: finishing a book requires meeting regularly; at one session a week, an appropriately divided book ties up participants' time for two to three months.
- Mid-course fade-out: keeping anything going is very hard; losing one person after another as sessions go by is natural, and if you cannot keep participants motivated, you may end up running a reading circle all alone.
- Difficulty of joining midway: because a circle works through a book, the bar for joining late is high, so numbers tend to fall and rarely rise.
- Sustaining leadership: the organizer carries the burden of preparation, securing participants' time, and facilitation, continuously until the circle finishes, not just once; doing it alone requires serious motivation.
- Recognizing differences in reading speed and comprehension: participants read and understand at different paces, and ignoring this leads to flat discussions and dull sessions.

All in all, getting through an entire book is genuinely hard. I myself have faded out midway, or let circles die of natural causes, many times. This time, however, I badly wanted to share this book's ideas widely within the company and to see it through to the end, so I kept thinking about how to solve these problems. For example, I hypothesized that running reading circles on the same book, in different ways, in multiple places beyond company boundaries, and creating an occasion to present the results at the end, could address the problems effectively, but realizing that was beyond me alone. So I consulted @soudai1025, who serves as technical advisor to our DBRE team, and with his cooperation we launched a project called the "Sodai-naru Reading Circle."

Holding the Sodai-naru Reading Circle

The Sodai-naru Reading Circle is a three-stage event in which multiple companies:

1. use a kickoff as the trigger,
2. hold reading circles over roughly three months, and
3. present their outputs at the end.

The kickoff is up on YouTube, so please check it out: https://www.youtube.com/watch?v=IBgmGtpW15Q

The road to our reading circle

With the kickoff done, here is roughly what I prepared before holding our circle.

Gathering comrades

First, recruiting in an open internal channel: I identified people interested in a reading circle and waited for hands to go up. The result: 14 comrades!
Transcribing the book (copying out the whole volume)

From the moment I decided to lead the reading circle, I was resolved to transcribe the book. Reading, writing, and reviewing in one motion, transcription is an excellent activity for understanding written material in a short time. Mind you, this book is over 300 pages; resolve is required (laughs).

Bulk-buying the book

As mentioned in our recruitment information, KINTO Technologies purchases any books employees need. Several of the 14 comrades did not own the book, so I used this program to bulk-buy copies for them.

Planning how to run the internal circle

I seriously agonized over how everyone could enjoy the circle with minimal burden, whenever they happened to show up. The concrete actions are introduced below. While doing all this, the internal kickoff slipped to February before I knew it.

Internal reading circle kickoff

Working Agreement

I shared with participants a summary of the atmosphere I personally wanted to create:

- This circle is designed to minimize the burden on participants
  - Even if you could not read the assigned part, we cover for each other
  - The first 10 minutes are silent reading time
- Discussion is the core, and its output is published, creating an atmosphere in which members who have not attended can join midway
  - Each session's output is compiled and made viewable by anyone
  - (If possible) record the Zoom and publish it
  - The same content can be run more than once
- Do not suppress participants' free expression
  - Accept that you will agree with some remarks and not with others
- Stimulate free discussion
  - Discussions happen in breakout rooms of at most four people
  - Small groups lower the psychological bar to speaking, so everyone can talk about what they want to talk about
- ROM (read-only member) participation is not rejected
  - Tell the members your situation so it is understood: "I can't really talk today," "my surroundings make it hard to speak"
- Everyone actively builds the output
  - Whoever has free hands (e.g., is not speaking) actively keeps the log of the discussion

How sessions run

Sustained discussion benefits from a fixed timeline. If you arrive a little late wanting to join and a heated discussion is underway, entering may feel psychologically difficult; conversely, if you roughly know what is happening ("it's silent reading time now"), joining late is fine. So we fixed the format firmly.

Basic structure:

- Silent reading time (10 minutes)
- Discussion time (30 minutes)
- Sharing (20 minutes)

Discussion content:

- What resonated
- What did not resonate
- What we want to actually try at KTC
  - Sharing the results of trying it at the next session might also be good

Discussion output:

- Notes during discussion go into a prepared Google Slide
- After discussion time, all teams share their content with one another

Choosing tools

Gather: We chose Gather as the web meeting tool. Since discussion was our core, we wanted people talking in groups small enough for everyone to speak casually. With Zoom we would have to create and populate breakout rooms every single time. Gather, a virtual office space, fit our needs exactly: gather everyone, then move to small rooms to discuss. It is not well suited to sharing recordings, so we gave that up and instead made sure to keep thorough logs that could be reviewed later.

Microsoft Loop: We chose Loop for collaboration. KINTO Technologies basically uses Confluence, but it is a bit weak for participants freely writing notes and co-editing. After much deliberation we adopted Loop, because the experience is not far from Confluence while co-editing is less stressful.

The "seconds" session

The circle was set for every Tuesday, 18:00-19:00. That is a bit late, though, and sudden work conflicts can arise, and for people with children it can overlap with family golden time. Miss even one session and the psychological bar to rejoining rises. So I decided to run exactly the same content again the next day, Wednesday 12:00-13:00. This lowered the risk of missing out, and those who attended the day before not only got time to understand the content more deeply but also heard other participants' views and gained new perspectives, so every session stayed fun.

Using generative AI

As written in the Working Agreement above, I strongly wanted a place where we cover for each other even if someone has not read the book. Even with 10 minutes of silent reading, finishing the assigned portion in 10 minutes is fairly difficult. Here my transcription and ChatGPT became powerful allies: summarizing just the needed portion of the transcribed text with ChatGPT dramatically changed the quality of participants' input, even within a 10-minute silent read. For example, here is a summary of Part 1. With roughly 12 pages condensed to this, doesn't silent reading time look effective?

![AI summary](/assets/blog/authors/_awache/20240422/AIざっくり要約.png =750x)

The original text is in Confluence, so if something in the summary catches your eye, a keyword search takes you straight to the point. Personally, I believe this is the single most important reason we made it all the way to the end.

The internal reading circle

![The reading circle in Gather](/assets/blog/authors/_awache/20240422/gather.png =750x)

In the end we held 17 sessions, and through all 17 I was never left alone (laughs). Some people attended every time; others came when they could. Thanks to the "seconds" sessions, even if attendance per session varied, attendance per chapter was a rather good number.

- Part 1: Understanding the merits of a remote organization / Part 2: The process for transitioning to the world's most advanced remote organization: 2024-02-13 (6 people), 2024-02-14 (9)
- Chapter 5: Culture is fostered by values: 2024-02-21 (8), 2024-02-27 (4), 2024-02-28 (4)
- Chapter 6: Communication rules: 2024-03-05 (7), 2024-03-06 (5)
- Chapter 7: The importance of onboarding in a remote organization / Chapter 8: Fostering psychological safety: 2024-03-13 (7), 2024-03-19 (5)
- Chapter 9: Drawing out individual performance / Chapter 10: HR systems based on the GitLab Values: 2024-03-26 (7), 2024-03-27 (5)
- Chapter 11: The manager's role and mechanisms to support management / Chapter 12: Achieving good conditioning: 2024-04-02 (6), 2024-04-03 (7)
- Chapter 13: Using L&D to improve performance and engagement / Afterword: 2024-04-09 (7), 2024-04-10 (5)
- Wrap up!
2024-04-16 (5 people), 2024-04-17 (4)

As for what we actually talked about, please come to the "Sodai-naru Reading Circle Finale" and hear it there! This book describes many ideals we should aim for, and there were many frank, hard-to-publish discussions about how we think about the gap with reality, so let me talk about that on the day.

What we gained and produced through this circle

Comrades: Several people spoke with me for the first time because they joined this circle, and through it I came to know how the participants think. I will treasure these connections while continuing to energize KINTO Technologies. We have a #thanks channel where people openly exchange gratitude, and receiving warm words there from participants on the day the circle ended made me very happy.

![thanks](/assets/blog/authors/_awache/20240422/thanks.png =750x)

The transcription: Because of it, I could produce the AI summaries and respond to the many topics that came up in discussion. I now consider it an essential process whenever I lead a reading circle.

AI summaries: Summaries produced with generative AI really are powerful. You may forget over time where something was written, but with a summary, a quick 10-minute skim brings the memory back.

A mandala chart: I summarized the book's key points for myself in mandala-chart form. Doing everything is of course impossible, so I will pick points and themes and expand what I can do little by little.

Closing

How was this look at our reading circle for "GitLabに学ぶ 世界最先端のリモート組織のつくりかた ドキュメントの活用でオフィスなしでも最大の成果を出すグローバル企業のしくみ"? It left me more satisfied than any reading circle I have ever run, which made me want to share it, hence this tech blog post. There is more I want to write, but it would get too long, so I will stop here for now.

Re-announcement: the "Sodai-naru Reading Circle Finale" is coming

Slots are still open. We will do our best to make it a fun event, so if you are willing to come, please apply; I sincerely ask you.

- Connpass: 「GitLabに学ぶ 世界最先端のリモート組織のつくりかた」そーだいなる輪読会 フィナーレ
- Date and time: Thursday, 2024-04-25, 18:00-21:00 (doors open 17:40)
- Format: offline
- Venue: KINTO Technologies (KTC) Muromachi office

This event is for people who have already held, are holding, or want to hold a reading circle for the book. I look forward to seeing you there! See you!!