TECH PLAY


KINTOテクノロジーズ の技術ブログ


## ごあいさつ

皆さまこんにちは。テックブログチーム改め技術広報グループの森です。実はこの4月より、テックブログチームは「技術広報グループ」として生まれ変わりました✨ 今後ともよろしくお願いします🙇‍♀️

技術広報以外のお仕事は別記事で書いておりますので、もしご興味あればぜひご一読ください👀

- KINTOのグローバル展開におけるGDPR等個人データ関連法対応
- GDPR対応! Cookie同意ポップアップをグローバルサイトに設置した話

## 導入

2024年1月31日、KINTOテクノロジーズ(KTC)では初となる全社オフラインミーティングを開催いたしました🎉 2024年のKick Offという位置づけです。実はこのイベント、完全ボトムアップで企画・運営されました。この大規模ミーティングがどのように作られたか、本記事で裏側をご紹介します。今後のための備忘録のようなものですが、「自社で内製イベントをすることになったけどどうしよう!?」という方に少しでも参考になれば何よりです。

本来ならすぐにレポートするところを、私の遅筆により約半年後の記事公開となってしまったこと、お許しください🙇‍♀️(イベント運営の記事は鮮度が大事なのに…😭)

## 企画のきっかけ

コロナ禍中に弊社従業員数は爆増し、いまや約350名の社員が所属しています。この規模になるとやはり横の繋がりや一体感を生み出すことはなかなか難しく、以前よりオフラインイベントやチームビルディングイベントを求める声が多くありました。また、トップ層からのメッセージ発信の場も多くはないので、全体ビジョンの浸透には時間を要していました。

そういった課題を踏まえ、「アフターコロナだし、全社員が集まれる機会があれば少しはこの課題もクリアになるかも」と、イベント運営によく携わる3名で企画が始まりました。これが11月初旬のお話。

## まずは大枠を

11月に3人で企画を開始したのですが、1月開催なので実施まで3ヵ月しか期間がなく、スケジュールはかなりタイトでした。ラフなスケジュールを以下のように引いて進めることになりました。

まずは開催すること自体に賛同を得るため、企画の大枠を以下のように検討しました。

**開催目的**

- 2023年1年の総括と2024年のキックオフ
- 共通のビジョンを共有すること・他部署間交流による組織の一体感醸成

**企画内容**

- 毎月の全社員ミーティング(開発編成本部会)の拡大版
- 前半はオンライン参加可能(業務内)
- 懇親会はオフライン参加のみ(業務外)

**コンテンツ**

| Category | Time | Contents | Note |
| --- | --- | --- | --- |
| リハ | 15:00-16:00 | 会場設営/リハーサル | 音響準備や進行の調整など |
| | 16:00-16:30 | 入場開始〜受付 | 参加者の受付 |
| 本編 | 16:30-16:35 | 開場〜オープニング | |
| | 16:35-16:40 | 2023年の振り返り(副社長) | 2023年の振り返りと2024年の展望をシェア |
| | 16:40-17:30 | 2023年の漢字 | 2022年末にも実施しました。各グループの振り返りコーナー |
| | 17:30-17:40 | 休憩 / プレゼン準備 | |
| | 17:40-18:35 | K-1グランプリ | 各部2023年の代表案件をプレゼンし、表彰! |
| | 18:35-18:45 | 休憩 | |
| | 18:45-19:00 | K-1グランプリ 結果発表 | 表彰と受賞者からのコメント |
| | 19:00-19:05 | 総括と2024年に向けて(社長) | 2023年総括と2024年への期待をシェア |
| 懇親会 | 19:05-19:20 | 写真撮影 / 休憩 / 転換 | |
| | 19:20-20:50 | 懇親会 | ・乾杯+鏡開き ・ミニゲームも入れて全社交流の時間! |
| | 20:50-21:00 | 撤収作業 | 21:00完全退出 |

## 各グループを巻き込め!

大枠が決定したので、全体の人数を把握すべく社内に公示しました。普段の社内イベントはSlackで全社に向けて一度アナウンスすることが多いのですが、今回はなにせ全社イベント。各グループの協力なくしては統率が取れません🤦‍♀️ そこで、各グループから担当者を立てていただき、各グループの取りまとめをお願いしました。

普段は何度も何度も運営からアナウンスしないとなかなか回収しきれない回答も、各グループ担当者に取りまとめていただいたことで比較的スムーズに、〆切までに回収することができました。各G担当の皆様、本当にありがとうございました!大感謝😭❤️

![announce](/assets/blog/authors/M.Mori/20240611/announce.png =500x)
私の部での告知の様子

## 想像以上のオフライン参加率!
今回のイベントは開発編成本部会、つまり全社員ミーティングという建付けですので、基本は全員参加必須です。家庭の都合や出張などでどうしてもオンライン参加になる方もいらっしゃいますが、それでも300名規模の会場が必要でした。オフィス近郊での会場探しはかなり苦戦しましたが、片っ端から検索しては電話を繰り返し、奇跡的に神保町オフィスから徒歩5分の「神田スクエアホール」を手配することができました。

![Hall](/assets/blog/authors/M.Mori/20240611/square_hall.jpg =500x)
とってもきれいな会場。神田スクエア様、ありがとうございます。

やむを得ずオンライン参加になった方や英語通訳チャネル(後述)のため、本部会パートはWebinar配信も行いました。配信担当の方々、本当にいつもありがとう😭❤️の気持ちです。

## 各担当で並行してタスクを遂行!

イベントを行う際は運営チームを分けてそれぞれでタスクを動かします。KINTOテクノロジーズのすごいところはアサインしたらそれぞれが自走してくれるところ…!!前のめりに動いてくれたり意見してくれたりするので、非常に助かります。今回は前述の各G代表者の中から数名を複数の役割に分けてアサインしました。

| 役割 | タスク詳細 |
| --- | --- |
| 統括 | 全体の取りまとめ、各担当者が困ったときの相談役 |
| 司会 | イベント全体のファシリテーション、盛り上げ(一番重要!)の施策検討 |
| 受付 | 誘導の流れを検討、案内すべき事項の取りまとめ |
| 通訳 | 多数所属するNon-Japaneseに向けた通訳用に外部通訳者様との調整担当 |
| 今年の漢字 | 各Gから2023年を表す漢字・2023年の成果と2024年への意気込みを取りまとめ |
| K-1グランプリ | 各部の代表案件を取りまとめ |
| 社長・副社長挨拶取りまとめ | 社長副社長の伝えたいメッセージとイベント趣旨をすり合わせて資料を作成 |
| 懇親会 | ケータリングを何にするか+懇親会で何をするかの検討 |
| ノベルティ | 全員に配布されるノベルティや景品などの作成 |

## 司会

当日の様子はまた別の記事でお伝えできると思いますが、今回は以前からイベントの司会や盛り上げをしてくれていた3名に総合司会をお願いしました。当日のタイムラインに合わせてパートの振り分けであったり、当日の流れを想定して、いつのタイミングでどういったスライドが必要か?どう盛り上げるか?などを考えてくれました。ざっくりタイムラインはあったものの、実際に司会をするにあたって気になるポイントを洗い出したり、スクリプトを作ったり。何の依頼もしていないのに「司会お願いします」と言っただけでここまでやってくれていました。感激😭❤️

![shinko](/assets/blog/authors/M.Mori/20240611/shikai_shinko.png =500x)
進行中の気になるポイント

![Script](/assets/blog/authors/M.Mori/20240611/shikai_script.png =500x)
司会スクリプト

## 受付

内部イベントとはいえこれだけ多くの人数が集まるイベントとなると、手際のよい受付が非常に重要です。受付担当としてメインで5名が手を挙げてくれました。(そして当日はたくさんの人がお手伝いしてくださいました…!!!)
受付で重要なのはなんといってもいかにスムーズに案内するか!受付でイベント参加者の第一印象が決まるため、受付に人が滞留すればするほどイベントへの不満はたまっていきます。 そこで今回工夫したのは従来の出席者リストで〇xをつけるのではなく、出席者の主体性に任せ、以下の流れで受付を行いました。 予め導線を作っておくことで、受付で停滞することなく非常にスムーズに会場へ誘導することができました。 一方で、会場までの誘導が行き届いていなかったのは反省点。次回の改善点としてメモです📝 通訳 KTCは多国籍なメンバーで構成されており、英語のほうが得意なメンバーが多数所属しています。今回は2023年の総括かつ2024年のキックオフということで経営層の大事な話も入るため、本部会本編は全コンテンツ通訳を入れることになりました。しかし、2時間半にも及ぶ本編を逐次通訳するのは素人では到底無理です🤦‍♀️ そこで、以前からオリエンテーションの通訳などでお世話になっている通訳会社様にお願いすることにしました。 🔻ZOOMでの通訳は通訳機能をONにしておくと言語チャネルを切り替えられるようになっています🔻 通訳者様が耳で日本語を聞き👂、そのまま英語チャネルで英語で発話🗣️することで、英語チャネルには英語音声が流れる仕組みです。 設定の方法はこちら👉 ミーティングまたはウェビナーでの言語通訳の使用 運営チーム内の通訳担当は現地にいない通訳者様とコミュニケーションを取り、音声・映像トラブルや会場の様子などを適宜コミュニケーションします。通訳があることで、経営層のメッセージを的確に伝えることができました。プロの通訳者様には頭が上がりません🙇‍♀️ 2023年の漢字 2022年末も実施したこの企画。各グループからマネージャーが登壇し、1年を表す漢字と総括、そして新しい1年に向けた意気込みを共有します。 事前に22グループの回答を取りまとめて当日の資料に反映させる作業を担当者にお願いしました。 忙しいマネージャー陣にお願いすることになるので、12月中旬に案内、1月19日の〆切です。 ![kanji_announce](/assets/blog/authors/M.Mori/20240611/kanji_announce.png =500x) 🔻こちらは旧テックブログチーム(現技術広報グループ)のもの。 ![kanji_blog](/assets/blog/authors/M.Mori/20240611/kanji_blog.png =700x) 🔺こんな感じでConfluenceに各グループの内容をまとめていただき、 🔻こんな感じに資料に落とし込んでいきました。 ![kanji_blog_ppt](/assets/blog/authors/M.Mori/20240611/kanji_blog_ppt.png =700x) 各グループのカラーが出ていておもしろかったのと、各グループのやっていたこと・やっていくことが知れる滅多にない機会になりました! 
## K-1グランプリ

何といっても今回の目玉企画です。弊社では毎月「景山賞」と称して特筆すべき案件や活動を表彰しています。

👉 参考記事: 全社員ミーティングをテコ入れした話

業務の振り返りと業務価値の再認識、そして部署を超えた情報共有が目的ですが、これの年度賞版をK-1グランプリと称して行うことになりました。大まかな流れは下図の通りです。月次賞ではプレゼンは行いませんが、今回は年度賞。プレゼン力も問われます。

グループの数が多いため、まずは各グループから案件をエントリーしてもらい、その中から各部代表案件をひとつずつ選出してもらいました。私はプラットフォーム部の選考会に賑やかしとして参加させていただいたのですが、普段違うグループで働いているメンバーを互いに称賛しあう場になっていたのが印象的でした。

アナウンス時や予選会、当日まで通してお伝えし続けてきたのは、K-1GPは年度賞ですが、決して優劣をつけることが目的ではないということです。この1年、皆さんが従事してきた仕事は全て素晴らしいものであることは大前提です。K-1GPの一番の目的は自身の業務を振り返り、お互いの仕事を称賛し合うことだったので、少なくとも私の参加したプラットフォーム部の予選会では、この「互いに称賛し合う姿」が見られて非常にうれしかったです。

こうして予選会で選出された代表案件は、それぞれ本部会までの1週間で各3分のプレゼン資料を準備いただき当日を迎えました。非常にタイトなスケジュールで準備をいただくことになり、代表者の皆さんには感謝感謝です🙇‍♀️ 集まっていくプレゼン資料はそれぞれ個性に溢れていて、毎日格納される資料をワクワクして待っていました。笑

## 社長・副社長ごあいさつ

2024年のキックオフということで、小寺社長と景山副社長からのごあいさつも大きなコンテンツでした。毎月の全体ミーティングで直接お話を聞く機会はなく、特に小寺さんに関してはKINTO/KTC合同の場でしかお話いただくことがなかったため、非常に重要な場でした。明確なトップメッセージを全員が聞くことで同じ方向を向いて仕事をすることができます。いわば軸のようなものです。

運営メンバーで事前に「KTCのエンジニアにどのようになってほしいか」「2024年KTCにどのようなことを求めるか」をすり合わせたり、逆にメンバー目線で「こういったことをぜひ発信いただきたい」ということをお伝えしたりして全体構成をまとめていきました。スライドはより伝わりやすいよう、我らがデザイナー軍団クリエイティブ室にお力添えいただきました。外国籍メンバーにも誤解の無いような言葉を選んだり、ビジュアルで補完したり。

![president_message](/assets/blog/authors/M.Mori/20240611/president_message.jpg =500x)
社長メッセージをビジュアル化

今回トヨタの新しいビジョン「次の道を発明しよう」(Inventing our path forward together)がタイミング良く発表され、こちらも改めて社長よりシェアされました。

![toyota_message](/assets/blog/authors/M.Mori/20240611/toyota_message.jpg =500x)
Inventing our path forward together

## 懇親会

さて、オフラインイベントの醍醐味といえば懇親会です。今回は会場指定のケータリングを利用させていただきましたが、ロゴ入りハンバーガーや飾りつけもすることができ、とても豪華になりました✨

![logo_burger](/assets/blog/authors/M.Mori/20240611/logo_burger.jpg =500x)

ケータリングはホワイエに用意し、本会場には何も置かなかったので、ご飯や飲み物を取りに行きにくかったのは反省点です。

さて、今回の乾杯は「鏡開き」にて行いました。運営メンバーみんな初めての生鏡開きだったので、事前に調べたところ「バールや大きなカッターが必要」と出てきて非常に焦りました。が、なんとそんな必要のない非常にお手軽なオリジナル樽を KURAND様のサイト[^1] で発見し、こちらを採用。

[^1]: KURAND様はこのご縁もあり、後日弊社主催のイベント 「ソースコードレビュー」まつり にご協賛いただきました。

![kagamibiraki](/assets/blog/authors/M.Mori/20240611/kagamibiraki.jpg =500x)

めちゃくちゃかわいくないですか!?
このオリジナルデザインはこちらも我らがクリエイティブ室のデザインです 💯 乾杯後は基本フリーではありましたが、なんといっても260人規模です。普段会話しない人とも会話してほしいのが運営の想い。 何か話のきっかけにできるものを検討しました。 当初はチーム分けしてゲームするか?と話していましたが、大人数すぎるし、強制参加もさせたくないし...と悩んでいたところで運営が見つけたのが Rally でした。 スマホで簡単にスタンプラリーができるサービスです。QRを読み込んでスタンプラリーができるので、このQRを各部ごとに配布すれば交流ができるのでは...!?即決でした。 フリープランでもいろいろとカスタマイズでき、1週間でけっこうな完成度のものができました。 🔻Rallyの使い方はこんな感じ。 ![rally_slides](/assets/blog/authors/M.Mori/20240611/rally_slides.jpg =700x) 受付で配布したQRコードシールが各自のIDケースに貼られているので、それを読み取ってスタンプを集める形式です。 準備の手軽さとコミュニケーションの促進という意味では非常に良かったです。非常に良かった。 強制参加させることもなく、スムーズに違う部署の人に声をかけあってる姿がもはや感動的でした。 ![rally_poster](/assets/blog/authors/M.Mori/20240611/rally_poster.jpg =500x) 当日掲示したポスター ノベルティ さて、事前準備編ということでもう一つ忘れてはいけない準備物がノベルティです。 タイトなスケジュールだったため、必要なものを最初に洗いだせておらず、クリエイティブ室の皆様にはかなり無理を言ってたくさんのものを作っていただきました。。 K-1 GPロゴ 表彰状 ![idcase](/assets/blog/authors/M.Mori/20240611/design_k1_logo.png =300x) ![award](/assets/blog/authors/M.Mori/20240611/design_award.jpg =300x) スライドマスタ 鏡割り用の樽デザイン ![slidemaster](/assets/blog/authors/M.Mori/20240611/slide_master.jpg =300x) ![sakadaru](/assets/blog/authors/M.Mori/20240611/design_sakadaru.png =300x) IDカードケース(全員配布) スタッフTシャツ ![idcase](/assets/blog/authors/M.Mori/20240611/design_idcase.jpg =300x) ![staff_shirts](/assets/blog/authors/M.Mori/20240611/design_staff_t.jpg =300x) タンブラー(スタンプラリー景品) エコバッグ (スタンプラリー景品) ![tumbler](/assets/blog/authors/M.Mori/20240611/design_tumbler.jpg =300x) ![eco_bag](/assets/blog/authors/M.Mori/20240611/design_bag.jpg =300x) 改めて見ても「どんなけ作らせるねん!?」とツッコミたくなるレベルですね。笑 これに加えて社内エンジニアには各自の名札を自動で作成できるツールを作成してもらいました。 🔻Slackアイコン・部署・名前・KTCロゴが全員分印字されます。 ![Name_card](/assets/blog/authors/M.Mori/20240611/namecard.jpg =300x) 「こんなのあったらいいな」と軽く言ってみたらほんとにすぐに作ってくれました。 自社ながら、KTCメンバーの仕事の速さとクオリティの高さには毎度驚かされます。 本業がある中でもご協力いただいた方々にこの場をお借りして改めて深く感謝します 🙇‍♀️🙇‍♀️🙇‍♀️ 運営してみた学び・次回開催に向けて もう半年も経ちましたが、こうしてやったことを書き出してみると、よく準備したなぁ…笑 今回の記事執筆でこのキックオフ会をふり返ってみて、改めて「組織のビジョンや目標をわかりやすく全社に共有すること」「オフラインでチームビルディングを行うこと」の重要性を認識しました。 
経営層から直接ビジョンや戦略が伝えられるだけで、その考えやダイレクションに基づいて同じ方向を向いて日々職務に従事することができます。また、この考えに共感できれば、社員のモチベーションアップにもつながります。これをオフラインで行うことにより、そのダイレクションは浸透しやすくなり、社員と経営層、さらには社員同士にも信頼関係が生まれ、疑問や不安の解消にも役立ちます。 特に弊社はKINTOサービススタートから5年経ち、会社としても次のステージに向かう最中。このタイミングでこういったイベントを行うことが、組織全体のエンゲージメント向上や、一体感の醸成に繋がるのだと実感しました✨ また別の記事などで実施結果もお伝えできると思いますが、参加者の声としても「仕事へのモチベーションが上がった」「他のチームが何をしているか、認識が強まった」「経営層の考えを知ることができた」など非常に好意的な反応が多く、実施した甲斐があったな、と思いました😄 こういったイベントはぜひ1年に一度は開催したく、次回開催に向けて運営の学びを活かし、至らない点は反省点としてさらなる改善を目指します💪 気づけば7000文字以上も書いてしまいましたが、それだけ思い入れのあったイベントだったということで。。 最後まで読んでいただきありがとうございます!KINTOテクノロジーズでは今後も社内外様々なイベントを計画中です! 社外向けイベントは 弊社Connpass にて随時募集しますので、ご興味あればぜひご参加ください 😄
Hello

Hi there, my name is Murayama, and I work as an assistant at the CIO office at KINTO Technologies. This article will introduce our employees' office and desk setups in a relaxed manner (˘ω˘)

Introduction to Our Offices

This is our head office in Nagoya. President Kotera-san’s strong vision is reflected in the interior, emphasizing natural elements and brightness. The fire pit you can see in the bottom right picture -which is Kotera-san's particular point of focus- is lit during certain times. It's located in the center of the office, where everyone gathers to have lunch together!

The second location is the Muromachi office. Our Muromachi office, located in Tokyo, has two floors. It also has this area we call “the Junction”. It's a very elegant spot, also used for video and photo shoots! It's conveniently located near many shops since it's housed inside of the COREDO Muromachi 2 building. In this area, you can find whatever you want to eat!

The third location is the Jimbocho office. I saw the Platform Group gathered in the big conference room, so I took a picture of them. The Jimbocho office is popular because it has the largest number of conference rooms. This area offers affordable lunch options; in particular, there are a lot of delicious curry restaurants! I always have curry whenever I visit 🍛 The photo in the bottom right corner is a vending machine with the KINTO Technologies logo at this office.

The fourth site is the Osaka Tech Lab. Not ‘office’, but ‘Tech Lab’! (This is important) It opened in April and still has few employees, but everyone there shares their opinions to improve it. The rooftop in the bottom right part of this photo is wide and popular. Lunch is also cheap around Shinsaibashi! Plus, Osaka's batter-based dishes are delicious! Although, being from Kanto, I'm not used to okonomiyaki set meals for lunch...

Introduction to Our Desk Setups

Each person personalizes their seat to work comfortably.
Functional desks reflect having a good setup, I’m sure, but I don’t think it's only about functionality. This is mine. I have a big cheering squad. It's a great desk setup, right?! Our vice president Kageyama-san also has some on his desk. Every once in a while, one of them rolls off somewhere, and I find it heartwarming and funny to see Kageyama-san search for it. It's inevitable: when you hold a Sylvanian Families figurine in your hands, it brings out your nurturing instincts. Before I make this blog all about Sylvanian Families, let's move on to the next desk.

![Employee commentary](/assets/blog/authors/uka/member-02.jpg =450x)

Cool keyboard! She enjoys building her own PCs and Gunpla. Great hobby! In her home desk setup, she has many Gunplas watching her. She seems to have also brought a small one into the office today. I gave her a Sylvania so she has even more friends now. By now, I'm one of the Sylvanian Families evangelists in the office!

![Employee commentary](/assets/blog/authors/uka/member-03.jpg =450x)

I'm sharing all this informally, but please know that I also perform well at my job. There’s an e-sports club in the KINTO Technologies community, and we all played Splatoon together the other day. As I work at an IT company, I naturally (I guess?) love games as well. The recent trend in the company is playing Mahjong!

![Employee commentary](/assets/blog/authors/uka/member-04.jpg =450x)

She says she wants Doraemon's Anywhere Door, and I can relate. I wish I could easily travel back and forth between the different offices... But setting wishes aside, whenever I am needed, I travel to the other offices too. Each office has its own good points, and I enjoy working in all of them!

![Employee commentary](/assets/blog/authors/uka/member-05.jpg =450x)

There are many people here who like books, and the company has a system to lend them, but it's also common to see employees lending books to each other. Also, I learned about Slack after joining KINTO Technologies.
It's a wonderful application filled with cute emojis, and it allows us to communicate with each other in a nice and informal way!

![Employee commentary](/assets/blog/authors/uka/member-06.jpg =450x)

This setup is super engineer-like, with its double display!! It's a wonderful desk with both functional aspects and modest comfort. By now, you should understand that Sylvanian Families are universally appealing, right?

Finally

Remote work is popular these days, but I think it is best to go to the office and work with everyone face to face in an atmosphere that you enjoy (˘ω˘) On top of that, you are free to change your hair color, clothes, and desk setup, allowing you to work in a comfortable environment, which makes it more enjoyable! Thank you for reading till the end!
こんにちは。DBRE チーム所属の @hoshino です。

DBRE(Database Reliability Engineering)チームでは、横断組織としてデータベースに関する課題解決や、組織のアジリティとガバナンスのバランスを取るためのプラットフォーム開発などを行なっております。DBRE は比較的新しい概念で、DBRE という組織がある会社も少なく、あったとしても取り組んでいる内容や考え方が異なるような、発展途上の非常に面白い領域です。弊社における DBRE チーム発足の背景やチームの役割については「 KTC における DBRE の必要性 」というテックブログをご覧ください。

この記事では、DBREチームが運用しているリポジトリに PR-Agent を導入した際に、どのような改善が見られたかについてご紹介します。また、プロンプトを調整することで、コード以外のドキュメント(テックブログ)のレビューにも PR-Agent を活用した事例についても説明します。少しでも参考になれば幸いです。

## PR-Agent とは?

PR-Agent は、ソフトウェア開発プロセスを効率化し、品質向上を目指す自動化ツールです。主な目的はプルリクエスト(PR)の一次レビューを自動化し、開発者がコードレビューに費やす時間を削減することです。自動化されることで、迅速なフィードバックが提供されることも期待できます。また、他のツールと異なる点として利用できるモデルが豊富なのも特徴です。

PR-Agent は複数の機能(コマンド)を持っており、どの機能を PR に対して適用するかを開発者が選択できます。主な機能は以下の通りです。

- Review: コードの品質を評価し、問題点を指摘するレビュー機能
- Describe: プルリクエストの変更内容を要約し、概要を自動生成する機能
- Improve: プルリクエストで追加・変更されたコードの改善点を提案する機能
- Ask: プルリクエスト上で AI とコメント形式で対話し、PRに関する質問や疑問を解消する機能

詳しくは 公式ドキュメント をご参照ください。

## なぜ PR-Agent を導入したか

DBRE チームでは、AI を活用したスキーマレビューの仕組みを PoC(概念実証)として進めていました。その過程で、レビュー機能を提供するツールを以下の観点で調査しました。

**インプット**

- KTC における Database スキーマ設計のガイドラインを元にスキーマレビューすることは可能か
- 回答精度を向上させる目的で、LLM へのインプットをカスタマイズ(Chainや独自関数の組み込み等)できるか

**アウトプット**

- レビュー結果を GitHub にアウトプットするために、LLM からのアウトプットを元に以下の条件が実現可能か
  - PR をトリガーにレビューを実施できるか
  - PR に対してコメントが可能か
  - PR 上のコード(スキーマ情報)に対して生成 AI からの出力をコメント可能か
  - コード単位で修正案を提示できるか

調査の結果、インプットの部分で要件に完全に合致するツールは見つかりませんでした。しかし、調査を進める中で、DBRE チーム内の検証で使用した AI レビューツールの一つを実験的に導入してみようという意見が出され、最終的に PR-Agent を導入しました。

調査を行ったツールのなかで PR-Agent を導入した主な理由は以下のとおりです。

- オープンソースソフトウェア(OSS)であること
  - コストを抑えながら導入することが可能
- 使用できるモデルの豊富さ
  - 様々な AI モデルに対応しており、ニーズに合わせたモデルを選択して使用できる点が魅力
- 導入の容易さとカスタマイズ性
  - 導入が比較的容易で、設定やカスタマイズが柔軟に行えるため、チームの特定の要件やワークフローに合わせて最適化することが可能

今回は Amazon Bedrock を使用しています。使用した理由は以下のとおりです。

- KTC は主に AWS を活用しており、スピード感を持って導入できる Bedrock でまずは試すことにした
- OpenAI の GPT-4 と比べて、Claude3 Sonnet を利用することで金銭的コストが 1/10 ほどに抑えられる

以上の理由から、DBRE チームのリポジトリに PR-Agent を導入しました。

## PR-Agentの導入時に実施したカスタマイズ

基本的には、公式ドキュメントに記載されている手順をもとに導入しています。当記事ではカスタマイズした内容を具体的にご紹介していきます。

### Amazon Bedrock Claude3 を利用

使用するモデルは Amazon Bedrock Claude3-sonnet を利用しています。
公式ドキュメント ではアクセスキーによる認証方式が推奨されていますが、社内のセキュリティ規則に準拠するという観点で、ARNによる認証方式を採用しました。

```yaml
- name: Input AWS Credentials
  uses: aws-actions/configure-aws-credentials@v4
  with:
    role-to-assume: ${{ secrets.AWS_ROLE_ARN_PR_REVIEW }}
    aws-region: ${{ secrets.AWS_REGION_PR_REVIEW }}
```

### GitHub の Wiki でプロンプトを管理

DBRE チームでは複数のリポジトリを運用しているため、プロンプトの参照元を一元管理する必要があります。また、PR-Agent 導入直後には、チームメンバーが簡単にプロンプトを編集し、プロンプトチューニングを行える環境を整えたいと考えました。

そこで検討したのが GitHub Wiki の活用です。GitHub Wiki は変更ログが残り、誰でも手軽に変更ができるため、これを利用することでプロンプトの変更をチームメンバーが容易に行えると考えました。

PR-Agent では、describe などの各機能に対して、追加の指示を extra_instructions という項目に GitHub Actions で設定することができます。( 公式ドキュメント )

```toml
# configuration.toml の内容を抜粋
[pr_reviewer] # /review
# extra_instructions = "" # 追加の指示を記載

[pr_description] # /describe
# extra_instructions = ""

[pr_code_suggestions] # /improve
# extra_instructions = ""
```

そこで、GitHub Wiki に記載されているプロンプトを、PR-Agent が設定された GitHub Actions 内で変数を通じて、追加の指示(プロンプト)として動的に加えるカスタマイズを行いました。以下、設定手順となります。

まず、任意の GitHub アカウントで Token を発行し、GitHub Actions を使って Wiki リポジトリをクローンします。

```yaml
- name: Checkout the Wiki repository
  uses: actions/checkout@v4
  with:
    ref: main # 任意のブランチを指定(GitHub の default は master)
    repository: {repo}/{path}.wiki
    path: wiki
    token: ${{ secrets.GITHUB_TOKEN_HOGE }}
```

次に、Wiki の情報を環境変数に設定します。ファイルの内容を読み込み、プロンプトを環境変数に設定します。

```yaml
- name: Set up Wiki Info
  id: wiki_info
  run: |
    set_env_var_from_file() {
      local var_name=$1
      local file_path=$2
      local prompt=$(cat "$file_path")
      echo "${var_name}<<EOF" >> $GITHUB_ENV
      echo "$prompt" >> $GITHUB_ENV
      echo "EOF" >> $GITHUB_ENV
    }
    set_env_var_from_file "REVIEW_PROMPT" "./wiki/pr-agent-review-prompt.md"
    set_env_var_from_file "DESCRIBE_PROMPT" "./wiki/pr-agent-describe-prompt.md"
    set_env_var_from_file "IMPROVE_PROMPT" "./wiki/pr-agent-improve-prompt.md"
```

最後に、PR-Agent のアクションステップを設定します。各種プロンプトの内容を環境変数から読み込みます。

```yaml
- name: PR Agent action step
  id: pragent
  uses: Codium-ai/pr-agent@main
  env:
    # model settings
    CONFIG.MODEL: bedrock/anthropic.claude-3-sonnet-20240229-v1:0
    CONFIG.MODEL_TURBO: bedrock/anthropic.claude-3-sonnet-20240229-v1:0
    CONFIG.FALLBACK_MODEL: bedrock/anthropic.claude-v2:1
    LITELLM.DROP_PARAMS: true
    GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
    AWS.BEDROCK_REGION: us-west-2
    # PR_AGENT settings (/review)
    PR_REVIEWER.extra_instructions: |
      ${{env.REVIEW_PROMPT}}
    # PR_DESCRIPTION settings (/describe)
    PR_DESCRIPTION.extra_instructions: |
      ${{env.DESCRIBE_PROMPT}}
    # PR_CODE_SUGGESTIONS settings (/improve)
    PR_CODE_SUGGESTIONS.extra_instructions: |
      ${{env.IMPROVE_PROMPT}}
```

以上の手順で、Wiki 上に記載されているプロンプトを PR-Agent に渡し、実行することが可能となります。

## レビュー対象をテックブログへ拡張するために実施したこと

弊社のテックブログは Git リポジトリで管理されています。そのため、PR-Agent を利用してブログ記事も同様にレビューできないかという意見がありました。

通常、PR-Agent はコードレビューに特化したツールです。試しにブログ記事をレビューしてみたところ、Describe および Review 機能はある程度機能しましたが、Improve 機能はプロンプト(extra_instructions)を調整しても「No code suggestions found for PR.」と回答されてしまいます。(コードのレビューを目的に開発されたツールのため、このような挙動になった可能性が考えられます)

そこで、Improve 機能の システムプロンプト をカスタマイズすることでレビューが可能かを検証したところ、生成AIからの回答が返ってきたため、システムプロンプト側もカスタマイズすることにしました。

システムプロンプトとは、LLM を Invoke する際に、ユーザープロンプトとは別に渡されるプロンプトのことを指します。アウトプットする項目やフォーマットの具体的な指示なども含みます。先程ご説明した extra_instructions はシステムプロンプトの一部であり、PR-Agent ではユーザーからの追加指示が存在する場合、その指示がシステムプロンプトに追加で組み込まれる仕組みになっているようです。

```toml
# Improve のシステムプロンプト抜粋
[pr_code_suggestions_prompt]
system="""You are PR-Reviewer, a language model that specializes in suggesting ways to improve for a Pull Request (PR) code.
Your task is to provide meaningful and actionable code suggestions, to improve the new code presented in a PR diff.
```
```
〜省略〜

{%- if extra_instructions %}

Extra instructions from the user, that should be taken into account with high priority:
======
{{ extra_instructions }}  # ここに extra_instructions で指定した内容が追記される。
======
{%- endif %}

〜省略〜
```

PR-Agent は extra_instructions と同様に、システムプロンプトも GitHub Actions から編集することができます。既存のシステムプロンプトをカスタマイズすることで、最終的にコードだけでなく文章もレビューできるようになりました。以下、カスタマイズ例の一部をご紹介します。

まず、コードに特化した指示をテックブログをレビューできるように変更していきます。

カスタマイズ前のシステムプロンプト

```
You are PR-Reviewer, a language model that specializes in suggesting ways to improve for a Pull Request (PR) code.
Your task is to provide meaningful and actionable code suggestions, to improve the new code presented in a PR diff.

# 日本語訳
# あなたは PR-Reviewer で、Pull Request (PR) のコードを改善する方法を提案することに特化した言語モデルです。
# あなたのタスクは、PR diffで提示された新しいコードを改善するために、有意義で実行可能なコード提案を提供することです。
```

カスタマイズ後のシステムプロンプト

```
You are a reviewer for an IT company’s tech blog.
Your role is to review the contents of .md files in terms of the following.
Please check each item indicated as a check point of view and point out any problems.

# 日本語訳
# あなたはIT企業の技術ブログのレビュアーです。
# あなたの役割は、.mdファイルの内容を以下の観点からレビューすることです。
# チェックポイントとして示されている各項目を確認し、問題があれば指摘してください。
```

次に、具体的な指示が記載されている部分をテックブログをレビューできるように変更していきます。アウトプットに関する指示を変えてしまうとプログラム側にも影響してしまうため、あくまでコードのレビュー指示をテキストに置き換えてテックブログをレビューできるようにカスタマイズをしています。

カスタマイズ前のシステムプロンプト

```
Specific instructions for generating code suggestions:
- Provide up to {{ num_code_suggestions }} code suggestions. The suggestions should be diverse and insightful.
- The suggestions should focus on ways to improve the new code in the PR, meaning focusing on lines from '__new hunk__' sections, starting with '+'. Use the '__old hunk__' sections to understand the context of the code changes.
- Prioritize suggestions that address possible issues, major problems, and bugs in the PR code.
- Don't suggest to add docstring, type hints, or comments, or to remove unused imports.
- Suggestions should not repeat code already present in the '__new hunk__' sections.
- Provide the exact line numbers range (inclusive) for each suggestion. Use the line numbers from the '__new hunk__' sections.
- When quoting variables or names from the code, use backticks (`) instead of single quote (').
- Take into account that you are reviewing a PR code diff, and that the entire codebase is not available for you as context. Hence, avoid suggestions that might conflict with unseen parts of the codebase.
```

カスタマイズ後のシステムプロンプト

```
Specific instructions for generating text suggestions:
- Provide up to {{ num_code_suggestions }} text suggestions. The suggestions should be diverse and insightful.
- The suggestions should focus on ways to improve the new text in the PR, meaning focusing on lines from '__new hunk__' sections, starting with '+'. Use the '__old hunk__' sections to understand the context of the code changes.
- Prioritize suggestions that address possible issues, major problems, and bugs in the PR text.
- Don't suggest to add docstring, type hints, or comments, or to remove unused imports.
- Suggestions should not repeat text already present in the '__new hunk__' sections.
- Provide the exact line numbers range (inclusive) for each suggestion. Use the line numbers from the '__new hunk__' sections.
- When quoting variables or names from the text, use backticks (`) instead of single quote (').
```
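補足として、上記テンプレートの `{%- if extra_instructions %}` ブロックが行っている「extra_instructions をシステムプロンプトへ組み込む」流れを、TypeScript の擬似実装でスケッチすると次のようなイメージです。関数名 `buildSystemPrompt` はあくまで説明用の仮のもので、PR-Agent 本体の実装そのものではありません。

```typescript
// 上記テンプレートの挙動を示すための簡易スケッチ(buildSystemPrompt は説明用の仮の関数名)
function buildSystemPrompt(basePrompt: string, extraInstructions?: string): string {
  // extra_instructions が無ければベースのシステムプロンプトをそのまま使う
  if (!extraInstructions) {
    return basePrompt;
  }
  // あれば "======" で囲んだ形でシステムプロンプトの末尾に追記される
  return [
    basePrompt,
    "",
    "Extra instructions from the user, that should be taken into account with high priority:",
    "======",
    extraInstructions,
    "======",
  ].join("\n");
}
```

このように追加指示が「高優先度の指示」としてベースのプロンプトの後ろに連結されるだけなので、ベース側(システムプロンプト本体)の前提と矛盾する指示は効きにくい、という挙動とも整合します。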
その後、先程ご説明した「Wikiでプロンプトを管理」の手順と同様に、新たにシステムプロンプト用の Wiki を追加します。

```diff
  - name: Set up Wiki Info
    id: wiki_info
    run: |
      set_env_var_from_file() {
        local var_name=$1
        local file_path=$2
        local prompt=$(cat "$file_path")
        echo "${var_name}<<EOF" >> $GITHUB_ENV
        echo "$prompt" >> $GITHUB_ENV
        echo "EOF" >> $GITHUB_ENV
      }
      set_env_var_from_file "REVIEW_PROMPT" "./wiki/pr-agent-review-prompt.md"
      set_env_var_from_file "DESCRIBE_PROMPT" "./wiki/pr-agent-describe-prompt.md"
      set_env_var_from_file "IMPROVE_PROMPT" "./wiki/pr-agent-improve-prompt.md"
+     set_env_var_from_file "IMPROVE_SYSTEM_PROMPT" "./wiki/pr-agent-improve-system-prompt.md"

  - name: PR Agent action step
    〜 省略 〜
+     PR_CODE_SUGGESTIONS_PROMPT.system: |
+       ${{env.IMPROVE_SYSTEM_PROMPT}}
```

以上の手順でカスタマイズすることで、通常はコードレビューに特化した PR-Agent の Improve 機能をブログ記事のレビューにも対応させることができました。

注意点として、システムプロンプトを変更しても必ずしも 100% 期待通りの回答が返ってくるわけではありません。これは、プログラムコードに対して Improve 機能を使用した場合も同様です。

## PR-Agent を導入した結果

PR-Agent を導入することで、以下のような点でメリットがありました。

**レビュー精度の向上**

- 普段見落としがちな内容も指摘してくれるので、コードレビューの精度が向上しました
- クローズされた過去の PR もレビューできるため、過去のコードを見直すことができます
- 過去の PR に対してレビューを行うことで、継続的な品質向上やコードベースの改善にもつながります

**プルリクエスト(PR)作成の負荷軽減**

- プルリクエストの要約機能により、プルリクエストの作成負担が軽減しました
- 要約された内容をレビュアーが確認することで、レビューの効率が向上し、マージまでの時間が短縮しました

**エンジニアスキルの向上**

- 技術の進歩は非常に素早く、普段の業務をしつつキャッチアップし続けることは難しいものです
- AI によって提供された指摘はベストプラクティスを学ぶのに非常に効果的でした

**テックブログのレビュー**

- テックブログにPR-Agentを導入することで、レビューの負荷が軽減されました。完璧ではないものの、記事の誤字脱字や文法のチェック、内容の一貫性や論理の整合性についても指摘してくれるので、見落としがちなミスも発見できます

以下に、実際のテックブログ( イベントレポート DBRE Summit 2023 )をレビューした例となります。

![pr_agent_describe.png](/assets/blog/authors/mhoshino/pr_agent_describe_blog.png =800x)
PR-Agentによるテックブログのプルリクエスト(PR)要約(Describe)

![pr_agent_describe.png](/assets/blog/authors/mhoshino/pr_agent_review_blog_01.png =800x)
![pr_agent_describe.png](/assets/blog/authors/mhoshino/pr_agent_review_blog_02.png =800x)
PR-Agentによるテックブログのプルリクエスト(PR)レビュー(Review)

![pr_agent_describe.png](/assets/blog/authors/mhoshino/pr_agent_improve_blog.png =800x)
PR-Agentによるテックブログの変更案(Improve)

また、注意点として以下の点から最終的な判断は人間が行うことが重要です。
- 全く同じプルリクエスト(PR)に対して PR-Agent が行うレビュー結果が毎回異なり、回答精度のばらつきがある
- PR-Agent によるレビューが、関連性が低いまたは完全に見当違いなフィードバックを生成する場合がある

## まとめ

本記事では、PR-Agent の導入とカスタマイズがどのように作業効率を向上させたかについてご紹介しました。完全なレビュー自動化はまだ実現できませんが、設定とカスタマイズにより、補助的な役割として開発チームの生産性向上に貢献しています。今後もこの PR-Agent を活用して、さらなる効率化と生産性の向上を目指していきたいと思います。
Introduction

My name is Kang, and I am in charge of front-end development of the KINTO ONE New Vehicle subscription system at KINTO Technologies. Allow me to briefly introduce the project I was assigned to. The KINTO ONE New Vehicle Subscription System is gradually incorporating a new architecture. The front-end team uses Next.js, TypeScript, and Atomic Design as the design pattern. In this article, we will introduce "Atomic Design", a methodology we are applying in our projects.

What is Atomic Design?

The Definition of Atomic Design

Atomic Design is a UI design methodology created by Brad Frost, which provides a framework for developing UI designs by breaking them down into components. To maximize code reusability, the idea is to define which would be your smallest building blocks (“Atoms”) and create higher-level components based on them.

In recent years, there has been an increase in the number of cases where JavaScript is used for front-end web development, with Vue and React serving as frameworks and libraries. Vue and React are known for their component-based development approach. As a result, Atomic Design, which emphasizes designing systems in a component-centric way, is gaining even more attention.

The benefits of Atomic Design

Atomic Design facilitates the creation of a system that enhances component reusability by dividing components into stages:

- Increases component reusability.
- Components can be developed and tested separately from applications. (Separate libraries such as Storybook or Jest allow you to check and test each component.)
- CSS is tightly bound to specific components, making it easier to manage CSS.
- Reusing existing components ensures a consistent design.
The disadvantages of Atomic Design

The need to design in order to enhance component reusability could create complexity in some cases, due to per-page element verification and the proliferation of configuration components:

- It becomes difficult to proceed without designing highly reusable components.
- Modifying components frequently can become complex and difficult to maintain.

Experiments and discoveries

A component structure for front-end development

Separating Presentational and Container Components

The Presentational component is responsible for configuring the appearance of the screen, and the Container component is responsible for executing API calls and front-side logic.

```tsx
const MypageContainer: React.FC<Props> = ({ setErrorRequest }) => {
  const { statusForCancellation, cancellationOfferDate, callGetCancellationStatus } = useCancellationStatus();
  const { authorizationInfo, memberType } = useAuth();

  useEffect(() => { ... }, []);
  useEffect(() => { ... }, [currentItem]);

  const initSet = async () => {
    try {
      // Data settings required for API calls or rendering
      ...
    } finally {
      ...
    }
  };

  const onChangeCurrentItem = (currentSlideNumber: number) => {
    // Event logic processing
    ...
  };

  /**
   * Render judgment is made with passed Props, set state and flag, etc.
   * Render the component that follows the judgment.
   * Each component is assembled into a Presentational component
   */
  return isLoading ? (
    <Loading />
  ) : (
    <div className="p-mypage">
      <div className="l-container">
        <div className="l-content--center">
          <MypageEstimateInfo data={estimateInfo} getEstimateInfo={getEstimateInfo} setErrorRequest={setErrorRequest} />
        </div>
        <div className="l-content--center">
          <MypageContactContent data={entryInfo} memberType={memberType} />
        </div>
        <ErrorModal
          isOpen={errorDialogRequest.isOpen}
          errorDialogRequest={errorDialogRequest.error}
          onClose={() => setErrorDialogRequest({ ...errorDialogRequest, isOpen: false })}
        />
      </div>
    </div>
  );
};

export default MypageContainer;
```

```tsx
const MypageContactContent: React.FC<Props> = ({ data, isContactForm = true, memberType }) => {
  return (
    <>
      <div className="o-contactContent">
        <ContentsHeadLine label="inquiries" /> {/* --> atom */}
        {isContactForm && <ContactAddressWithFormLink />} {/* --> molecule */}
        <TypographyH4>Enquire by phone</TypographyH4> {/* --> atom */}
        <ContactAddressWithForAccident // --> molecule
          tel={CONTACT_ADDRESS_TEL.member}
          isShowForAccident={memberType === MEMBER_TYPE.MEMBER}
        />
      </div>
    </>
  );
};

export { MypageContactContent };
```

You can see that the container component above determines the configuration of the component based on the value obtained from the API and the value set after logic processing. The rendered component then builds the presentational component to display the screen via the value passed from the container to Props.

Component Group Configuration (Atoms, Molecules, Organisms, Templates, Pages)

Atoms

An Atom is the most basic and indivisible component. Atoms can be combined to create bigger units for their usage, such as Molecules and Organisms.

Molecules

Molecules are the combination of multiple Atoms and have their own particular characteristics. The important thing about a Molecule is that it will only serve one purpose.

Organisms

Organisms are more complex than the previous levels of the component hierarchy, having clear areas where they will appear in a service, with their specific context.
Compared to Atoms or Molecules, an Organism has less reusability due to its context, as it is more specific.

Templates

Templates can be created by combining multiple Organisms and Molecules. They are essentially wireframes in which the actual components are placed and structured as a layout.

Pages

Pages are where the content that users can see is populated. You could call them instances of Templates.

Synergy with Storybook

Storybook is an open-source UI tool. With Storybook, you can quickly visualize the UI components you are building. Integration with the Storybook library makes UI management easier (UI testing will become easier to perform too).

Summary

Today I shared my experiences implementing Atomic Design in my project. While some concepts were initially unclear when applying these principles in practice, we adjusted the scope and classification of component groupings to better fit our project (for example, components that were expected to be out of scope for Organisms were managed in a separate unit we called Features). Without a clear definition from the start, components may need to be redesigned, recreated, or reclassified mid-process, requiring careful attention (or adding more component hierarchies could be another option). I also found that collaboration and communication between the design and development teams were very important (because designs need to be broken down into components such as Atoms, Molecules, or Organisms). The alignment of understanding regarding the criteria for each component grouping is essential. It will be essential for all sides to have alignment meetings together in order to ensure that everyone is on the same page, as each team typically concentrates on their individual roles.
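As a rough, framework-free sketch of the hierarchy discussed above (the component names here are purely illustrative, not from our codebase), each level composes only the level(s) below it:

```typescript
// Illustrative sketch of the Atomic Design hierarchy (component names are hypothetical).
// Each level is built only from the level(s) below it.

// Atoms: the smallest, indivisible building blocks
const Label = (text: string): string => `<span>${text}</span>`;
const Input = (name: string): string => `<input name="${name}" />`;

// Molecule: a few Atoms combined to serve exactly one purpose
const SearchField = (): string => Label("Search") + Input("q");

// Organism: Molecules/Atoms placed in a concrete area of the service
const SiteHeader = (): string => `<header>${SearchField()}</header>`;

// Template/Page: Organisms arranged into a layout, then filled with real content
const TopPage = (): string => `<main>${SiteHeader()}</main>`;
```

The key property to preserve in a real React/Next.js codebase is the same one this toy version shows: dependencies only flow downward, so an Atom never imports a Molecule or an Organism.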
Atomic Design has its own set of pros and cons, but I believe that if the entire team understands it and defines it well before implementation, it will be easier later to create a frontend development environment that facilitates smooth collaboration and maintenance. Thank you for reading.

References

- Atomic Web Design (Brad Frost)
- Design systems are for user interfaces
Introduction
Hello! I'm Hasegawa, an Android engineer at KTC! I usually work on the development of an app called my route. Please also check out these other articles written by members of the my route Android team!

Android開発をする時に知っておかないとバグを引き起こしそうな「地域別の設定」について
SwiftUI in Compose Multiplatform of KMP

This article covers how to fetch OG information in Kotlin (Android), and the trouble I ran into with character encodings along the way.

What this article explains:
- What OGP is
- How to fetch OGP data in Kotlin
- Why information fetched via OGP can end up garbled
- How to fix the garbled text

What is OGP?
OGP is short for "Open Graph Protocol": a set of HTML elements for correctly conveying a web page's title, preview image, and so on when the page is shared to another service. A web page with OGP configured contains meta tags holding this information. Below is an excerpt of such tags; a service that wants OG information can read it from these meta tags.

<meta property="og:title" content="ページのタイトル" />
<meta property="og:description" content="ページの説明文" />
<meta property="og:image" content="サムネイル画像のURL" />

How to fetch OGP in Kotlin
Here we use OkHttp for networking and Jsoup for HTML parsing. First, use OkHttp to access the web page whose OG information you want. Error handling depends on your requirements, so it is omitted.

val client = OkHttpClient.Builder().build()
val request = Request.Builder().apply {
    url("the URL whose OG information you want")
}.build()
client.newCall(request).enqueue(
    object : okhttp3.Callback {
        override fun onFailure(call: okhttp3.Call, e: java.io.IOException) {}

        override fun onResponse(call: okhttp3.Call, response: okhttp3.Response) {
            parseOgTag(response.body)
        }
    },
)

Next, parse the body with Jsoup.

private fun parseOgTag(body: ResponseBody?): Map<String, String> {
    val html = body?.string() ?: ""
    val doc = Jsoup.parse(html)
    val ogTags = mutableMapOf<String, String>()
    val metaTags = doc.select("meta[property^=og:]")
    for (tag in metaTags) {
        val property = tag.attr("property")
        val content = tag.attr("content")
        val matchResult = Regex("og:(.*)").find(property)
        val ogType = matchResult?.groupValues?.getOrNull(1)
        if (ogType != null && !content.isNullOrBlank()) {
            ogTags[ogType] = content
        }
    }
    return ogTags
}

With this, ogTags holds the OG information we need.

Why information fetched via OGP gets garbled
The code above should fetch OG information correctly for most web pages. For some pages, however, mojibake (garbled text) can occur. Here is why. We called the string() function like this:

val html = response.body?.string() ?: ""

This function selects the character encoding with the following priority:
1. The BOM (Byte Order Mark)
2. The charset in the response header
3. UTF-8, if neither 1 nor 2 specifies one

The details are described in
the comments in the OkHttp repository. So: what do you think happens with a web page that has no BOM, no charset in the response header, and is encoded in something other than UTF-8, such as Shift_JIS? ... You get mojibake, because the body is decoded with the UTF-8 default. So what can we do? The next section explains a concrete fix.

How to fix the garbled text
The previous section identified the cause. In fact, a web page's character encoding may also be specified inside the HTML itself, as shown below. If there is no BOM and no charset in the response header, this is the only information left to use.

<meta charset="UTF-8"> <!-- HTML5 -->
<meta http-equiv="content-type" content="text/html; charset=Shift_JIS"> <!-- pre-HTML5 -->

At first this looks circular: to read the meta tag that specifies the encoding, we have to parse the HTML, which requires knowing the encoding. However, UTF-8 and Shift_JIS (for example) are compatible within the ASCII range, so decoding once as UTF-8 is good enough to find the tag. (This approach may parse twice. If we scanned the raw bytes for the meta tag in advance, we could perhaps determine the encoding before parsing, but here I prioritized code clarity.) So we can write the following code:

/**
 * Builds a Jsoup Document from the response body.
 * If the body's charset is not UTF-8, obtain the charset and parse again.
 */
private fun getDocument(body: ResponseBody?): Document {
    val byte = body?.bytes() ?: byteArrayOf()

    // If a charset is specified in the response header, decode with that charset
    val headerCharset = body?.contentType()?.charset()
    val html = String(byte, headerCharset ?: Charsets.UTF_8)
    val doc = Jsoup.parse(html)

    // If headerCharset was specified, the document has been parsed correctly,
    // so return it as is.
    if (headerCharset != null) {
        return doc
    }

    // Obtain the charset from the meta tag inside the HTML.
    // If there is none, the encoding is unknown, so return the doc parsed as UTF-8.
    val charsetName = extractCharsetFromMetaTag(html) ?: return doc
    val metaCharset = try {
        Charset.forName(charsetName)
    } catch (e: IllegalCharsetNameException) {
        Timber.w(e)
        return doc
    }

    // If the charset in the meta tag differs from UTF-8, parse again with it.
    // Parsing is a relatively heavy operation, so we avoid doing it twice unnecessarily.
    return if (metaCharset != Charsets.UTF_8) {
        Jsoup.parse(String(byte, metaCharset))
    } else {
        doc
    }
}

/**
 * Extracts the charset string from the HTML meta tags.
 *
 * pre-HTML5 → meta[http-equiv=content-type]
 * HTML5 and later → meta[charset]
 *
 * @return the charset string, e.g. "UTF-8", "SHIFT_JIS"
 * @return null if no charset is found
 */
private fun extractCharsetFromMetaTag(html: String): String?
{
    val doc = Jsoup.parse(html)
    val metaTags = doc.select("meta[http-equiv=content-type], meta[charset]")
    for (metaTag in metaTags) {
        if (metaTag.hasAttr("charset")) {
            return metaTag.attr("charset")
        }
        val content = metaTag.attr("content")
        if (content.contains("charset=")) {
            return content.substringAfter("charset=").split(";")[0].trim()
        }
    }
    return null
}

Then change the code that builds the Jsoup Document to use the function we just created:

- val html = body?.string() ?: ""
- val doc = Jsoup.parse(html)
+ val doc = getDocument(body)

Closing
Thank you for reading this far. Most web pages use UTF-8, and even pages using a different encoding almost always have a BOM or a charset in the response header, so the problem described here should be rare. If you do run into such a site, though, the cause can be hard to identify and fix. I hope this article helps someone.
Introduction
Hello, Morino from the KINTO Technologies CSIRT here. I participated in the Nippon CSIRT Association's TRANSITS Workshop in Summer 2023, which ran for three days, from Wednesday, July 12 to Friday, July 14, 2023. TRANSITS provides training content from Europe on establishing and operating a CSIRT. The workshop covered four modules: Organization, Operations, Technology, and Law. CSIRT stands for Computer Security Incident Response Team: a team that responds to computer security incidents such as leaks of confidential information, unauthorized intrusions into computer systems, and malware infections.

Organization Module
In the Organization module, I learned about the CSIRT's role, the services it provides, and how such a team is structured. There was also an incident-scenario exercise in which each team played roles such as CSIRT member and attacker. Through this exercise, I experienced the flow of incident response and the importance of communication.

Operations Module
In the Operations module, I learned about incident response and incident handling. "Incident response" refers to addressing an incident directly, such as analysis and containment, whereas "incident handling" refers to the overall response to incidents. There was also an exercise in which each team examined the incident-handling process. It taught me the importance of preparing incident response procedures in advance.

Technology Module
In the Technology module, I learned about attackers' techniques and methods. The lecture included a talk from a security vendor involved in analyzing incidents at various organizations. They claimed that in almost every incident they were involved in, the attack could have been detected with proper monitoring. I was also impressed by the following words, described as the foundation of security:

Close doors after you have opened them
Tidy up after yourself
If you start a system, always put maintenance measures in place

Law Module
In the Law module, I learned about cybersecurity laws and regulations. The legal requirements and precautions for capturing and storing logs were discussed in detail. Along with an introduction to the eDiscovery system, a security service provider also explained how to cooperate with the police.

Summary
The TRANSITS Workshop in Summer 2023 was an unforgettable experience. I was able to deepen my knowledge and skills related to CSIRT through the lectures I attended. Furthermore, participating in the exercises allowed me to interact with fellow participants, enriching the overall experience. I highly recommend this workshop to anyone establishing or operating a CSIRT.
Hello, this is HOKA from the HR Group. (I've also written 人事採用グループをつくろう ~3年で社員数300人に急成長した組織の裏側~ in the past, so please have a look at that as well.)

On March 28, 2024, with only three days left in fiscal 2023, 40 members of KINTO Technologies' Development Support Division working in Osaka, Nagoya, and Tokyo gathered at the Google office in Shibuya, Tokyo, to run the 10X Innovation Culture Program. Here is my report.

What is the 10X Innovation Culture Program?
It is a "leadership program for building an organizational environment that generates innovation," published by Google Japan in September 2023. It consists of three elements: online training, an assessment tool, and a solution package. You learn the outline of "Think 10X" through the online training, understand where your organization stands and identify its issues with the assessment tool, and use the solution package to help solve those issues. Through this natural flow, the program lets you bring ways of thinking and insights useful for organizational transformation into your own organization.

How it started
Awatchi (あわっち), a DBRE at our company, is one of the organizers of the 企業カルチャーとイノベーションを考える分科会, one of the working groups within Jagu'e'r, and had a strong interest in the 10X Innovation Culture Program. When the program was published, Awatchi organized a casual session for volunteers from our company to try it out at the Google office, and my joining that session is where this story begins. It was so much fun and so instructive that when I shared it at the next morning's stand-up, a manager said, "Let's do that with the whole team!" and the division head who overheard added, "If it's just the Development Support Division, I can approve it myself!" Before we knew it, the event was on. I rather like this culture and this speed, where "we're doing it" is decided before getting approval from the president or vice president. From there, the trial and error began: how do we run this at the scale of the whole Development Support Division, more than 40 people? (laughs)

The road to the event
At first we considered running it with only our own members, but that would make it hard to roll out to departments beyond Development Support. To realize a "10X Innovation Culture," we first have to understand it thoroughly ourselves and be able to speak about it with confidence. So this time, we decided to hold the 10X Culture Program at the Google office, led by Google employees, on the premise of learning how the actual program is run and of training the "facilitators" needed to roll it out internally later. A preliminary survey within the Development Support Division asking who wanted to be a facilitator drew 17 volunteers across job types and genders, including members outside HR. (The volunteers attended the 10X Culture Program with the intention of becoming facilitators.)

Once the overall framework was set, two people from Google, Awatchi, and I took the lead in planning the content. Drawing on the experience of taking the program in October 2023, we decided to have participants "watch the videos and complete the assessment" in advance so the day itself could focus on discussion.

Preparation session, online: watch the six videos; complete the assessment.
10X Innovation Culture Program at the Google office: read the Development Support Division's tendencies from the assessment results; hold two discussions.

Running the preparation session (March 20)
Awatchi started preparing for his first-ever preparation session. But then:
The provided assessment tool didn't work as expected! → Awatchi fixed it by brute force.
There was no English version of the assessment! → We asked an in-house specialist for an English translation.
There were no English videos! →
We used YouTube's translation tools. All sorts of issues like these came up, and people inside and outside the company helped us. 24% of KINTO Technologies' employees are non-Japanese nationals, and this was a moment that reminded us how important English support is.

Awatchi facilitated the preparation session. We repeated the cycle of watching one 10X Innovation Culture Program video and then answering the assessment. At the end of the session, we shared the results on the spot via Looker Studio, the assessment tool. Seeing the tendencies of the whole Development Support Division and of each group made the participants even more engaged.

![アセスメント結果](/assets/blog/authors/hoka/20240611/assessment_result.png =750x)

The day itself (March 28)
March 28 finally arrived. A total of 40 people from Tokyo, Osaka, and Nagoya assembled at the Google office. Holding it at the Google office, which we rarely get to enter, we felt like complete tourists (laughs).

![Googleオフィスに集合](/assets/blog/authors/hoka/20240611/arriving.png =750x)

Rika-san and Kota-san from Google facilitated on the day. In the opening, they explained what the "10X" in 10X Innovation Culture Program actually means, with examples from Google.

![実施風景](/assets/blog/authors/hoka/20240611/state.png =750x)

Everyone is listening intently and taking notes. Then the discussions finally began. To make the most of the limited time, we split into groups of about five and discussed "intrinsic motivation" and "risk-taking," the two areas with the most room to grow according to the pre-event assessment. For "intrinsic motivation," we discussed questions such as "What do we need in order to work on daily tasks with passion?" and "How can we realize that within the company?" For "risk-taking," we exchanged views on questions like "How do we lower the psychological hurdle to trying new things?" and "How do we build a culture that tolerates failure?"

This is where the facilitators lead each group. In this culture-session discussion, running the group means making sure everyone gets to speak, not focusing too narrowly on any individual topic, and letting the conversation diverge appropriately.

The Agreement for the workshop that Google presented to everyone was as follows:

Premises
Treat it as a learning opportunity
Accept that making mistakes is normal
Points of caution
Be aware of the effect your words have on those around you
Interpret opinions as given in good faith
Do not share what others said outside the session
Let's Enjoy Google Culture!
Call each other by nicknames. These turned out to be very important points for making the group work smooth and lively. Here is what the discussions actually looked like:

![ディスカッション1](/assets/blog/authors/hoka/20240611/discussion1.png =750x) ![ディスカッション2](/assets/blog/authors/hoka/20240611/discussion2.png =750x) ![ディスカッション3](/assets/blog/authors/hoka/20240611/discussion3.png =750x) ![ディスカッション4](/assets/blog/authors/hoka/20240611/discussion4.png =750x) ![ディスカッション5](/assets/blog/authors/hoka/20240611/discussion5.png =750x) ![ディスカッション6](/assets/blog/authors/hoka/20240611/discussion6.png =750x)

The session was lively from start to finish, and at the same time we were able to learn and feel our own issues and what we can do to improve them. Here are the actual survey results:

![アンケート結果1](/assets/blog/authors/hoka/20240611/survey1.png =750x) ![アンケート結果2](/assets/blog/authors/hoka/20240611/survey2.png =750x) ![アンケート結果3](/assets/blog/authors/hoka/20240611/survey3.png =750x)

We also received kind comments from the two Google facilitators!

Rika-san: "Thank you for your hard work! This energy exists thanks to the advance preparation by everyone in this space, so let me thank you in return! We too were energized by the enthusiastic participants in the workshop 💪 I believe the way your company pushes forward its culture transformation will influence other companies as well!"

Kota-san: "Thank you for giving us this valuable opportunity! I was overwhelmed by everyone's energy. I hope this becomes a trigger for KINTO's culture development to move to the next stage. We're cheering for you! We also hope to develop this into further initiatives together, so let's keep it going 😃"

Afterwards
Since it was the end of March, the program coincided with goal-setting season at KINTO Technologies. After taking the program, we started hearing "Isn't that not very 10X?" from all directions, and I feel that each of us has begun thinking more seriously than ever about what we want our organization to be. We also decided to try introducing the "20% rule," famously practiced at Google, in the Development Support Division first. This system had felt out of reach because of the image that "only Google can do that," but actually experiencing the program ourselves shifted our mindset to "maybe we can do it too." A follow-up session in the Development Support Division is set for the end of June, three months later (coming right up), and this time we will run it ourselves. We are also preparing to introduce it in divisions beyond Development Support. Our facilitators are expected to play an active role.

What did you think? If you feel your company culture has issues, or you want to make your company better, please take a look at the 10X Innovation Culture Program. I'm sure you will come away with good insights.

Announcement
Speaking at Google Cloud Next Tokyo '24 👏👏 On August 2, 2024, at Google Cloud Next Tokyo '24, our Development Support Division head Kishi and Awatchi, who promotes 10X internally, will present our 10X Innovation Culture Program workshop as a case study. We will talk candidly about what we felt through this experience, so please stop by if you have time.
Introduction
Hello! My name is Ren.M from KINTO Technologies' Project Promotion Group. My main role is developing the front end of KINTO ONE (Used Vehicles) . This time, I'd like to introduce the basics of TypeScript, specifically type definitions.

Target audience of this article
Those who want to learn about TypeScript type definitions
Those who want to learn TypeScript after studying JavaScript

What is TypeScript?
TypeScript is a language that operates as an extension of JavaScript, so both TypeScript and JavaScript use the same syntax. Traditional JavaScript has no obligation to specify data types, which allows flexible coding; however, something was needed to improve program reliability and mitigate issues such as inconsistent typing. Enter TypeScript, which leverages static typing to address these concerns. By understanding type definitions, you will be able to code smoothly and pass data around more safely.

How TypeScript differs from JavaScript
JavaScript allows assignments of different data types:

let value = 1;
value = "Hello";

In TypeScript, however, the behavior is as follows:

let value = 1;
// Bad: "Hello" is not a number, so it cannot be assigned
value = "Hello";
// Good: can be assigned because it is also a number
value = 2;

The main data types

// string type
const name: string = "Taro";
// number type
const age: number = 1;
// boolean type
const flg: boolean = true;
// array of strings
const array: string[] = ["apple", "banana", "grape"];

Explicitly defining a type after : is called a type annotation.

Type inference
As shown above, TypeScript automatically assigns types even without type annotations. This is called type inference.
let name = "Taro"; // string type
// Bad: cannot assign a number because name is a string
name = 1;
// Good: can be assigned because it is a string
name = "Ken";

Array type definitions

// an array that accepts only numbers
const arrayA: number[] = [1, 2, 3];
// an array that accepts numbers or strings
const arrayB: (number | string)[] = [1, 2, "Foobar"];

interface
The type of an object can be defined with an interface .

interface PROFILE {
  name: string;
  age?: number;
}

const personA: PROFILE = { name: "Taro", age: 22 };

As with age above, adding "?" after a key makes the property optional.

// the 'age' element is not required
const personB: PROFILE = {
  name: "Kenji",
};

Intersection Types
Combining multiple types is called an intersection type. In the following, STAFF has all the properties of both PROFILE and JOB .

type PROFILE = {
  name: string;
  age: number;
};
type JOB = {
  office: string;
  category: string;
};
type STAFF = PROFILE & JOB;

const personA: STAFF = {
  name: "Jiro",
  age: 29,
  office: "Tokyo",
  category: "Engineer",
};

Union Types
More than one type can be allowed using | (pipe).

let value: string | null = "text";
// Good
value = "kinto";
// Good
value = null;
// Bad
value = 1;

In the case of arrays:

let arrayUni: (number | null)[];
// Good
arrayUni = [1, 2, null];
// Bad
arrayUni = [1, 2, "kinto"];

Literal Types
The assignable values themselves can also be typed explicitly.

let fruits: "apple" | "banana" | "grape";
// Good
fruits = "apple";
// Bad
fruits = "melon";

typeof
If you want to inherit a type from a declared variable, use typeof.

let message: string = "Hello";
// inherits the string type from message
let newMessage: typeof message = "Hello World";
// Bad
newMessage = 1;

keyof
keyof creates a type from the property names (keys) of an object type.
type KEYS = {
  first: string;
  second: string;
};

let value: keyof KEYS;
// Good
value = "first";
value = "second";
// Bad
value = "third";

enum
An enum (enumeration type) is a feature that automatically assigns consecutive numbers. The following assigns 0 to SOCCER and 1 to BASEBALL . Enums improve readability and make maintenance easier.

enum SPORTS {
  SOCCER,
  BASEBALL,
}

interface STUDENT {
  name: string;
  club: SPORTS;
}

// 1 is assigned to club
const studentA: STUDENT = {
  name: "Ken",
  club: SPORTS.BASEBALL,
};

Generics
With generics, you declare a type each time you use it. This is useful when you want to reuse the same code with different types. By convention, a type parameter named T is often used.

interface GEN<T> {
  msg: T;
}

// declare the type of T when used
const genA: GEN<string> = { msg: "Hello" };
const genB: GEN<number> = { msg: 2 };
// Bad
const genC: GEN<number> = { msg: "message" };

When you define a default type, declarations such as <string> become optional.

interface GEN<T = string> {
  msg: T;
}

const genA: GEN = { msg: "Hello" };

You can also use extends to restrict the types that can be used.

interface GEN<T extends string | number> {
  msg: T;
}

// Good
const genA: GEN<string> = { msg: "Hello" };
// Good
const genB: GEN<number> = { msg: 2 };
// Bad
const genC: GEN<boolean> = { msg: true };

For use with functions:

function func<T>(value: T) {
  return value;
}

func<string>("Hello");
// <number> is not required thanks to inference
func(1);
// multiple types are allowed
func<string | null>(null);

When extends is used in a function:

function func<T extends string>(value: T) {
  return value;
}

// Good
func<string>("Hello");
// Bad
func<number>(123);

When used in conjunction with an interface:

interface Props {
  name: string;
}

function func<T extends Props>(value: T) {
  return value;
}

// Good
func({ name: "Taro" });
// Bad
func({ name: 123 });

Conclusion
In this article, we covered the fundamentals of TypeScript. What are your thoughts?
TypeScript's usage in front-end development has been on the rise, and I believe that learning and using TypeScript can prevent data type inconsistencies and facilitate safer development with fewer bugs. I hope this article was helpful! There are many other articles on the tech blog, so please check them out!
Introduction
I am Kanaya, a member of the team[^1] that develops the payment platform used by multiple services at KINTO Technologies. Today I will introduce a case of remote mob programming on a new project, highlighting its role in achieving timely development. [^1]: For other payment platform initiatives, please see Domain-Driven Design (DDD) incorporated in a payment platform intended to allow global expansion .

Background
We started a project to create a new internal payment system. The project team included a product owner located in Tokyo and three software developers: one based in Tokyo and two in Osaka, myself included. Aiming for quick development and cost efficiency, especially around AWS costs, we opted for a React-powered SPA on AWS Amplify for the frontend, while using the AWS Serverless Application Model to create the necessary APIs.

Challenges
At the beginning of the project, I felt somewhat concerned, especially about using AWS services that were new to me. When I discussed my concerns with each team member, I found that the three of us had different areas of technical expertise. Specifically, dividing the work into three areas (frontend, backend, and infrastructure (AWS)), each of us was good at two areas but lacked confidence in one. For example, K-san and I are proficient in frontend and backend development but lack confidence in AWS. Thankfully, we were reassured by the fact that every area was covered by more than one person, which meant the workload wouldn't fall solely on anyone.

| Team Member | Frontend | Backend | Infrastructure (AWS) |
| --- | --- | --- | --- |
| K-san | 😊 | 😊 | 😐 |
| T-san | 😊 | 😐 | 😊 |
| N-san | 😐 | 😊 | 😊 |

Chart displaying the development team members and their corresponding areas of expertise

Also, the biggest concern I learned of during our conversations was the lack of a common understanding for developing as a team.
For example, with the frontend alone: what should the granularity of components be? At what level of detail should state management be performed? Should we use a promise-based approach or async/await? How much testing should we write? Since there was not a single line of code at the beginning, every decision had to be made from scratch. Our team consisted of members who had joined the company less than a year earlier, so we lacked common points of reference to draw on. So we decided that building a common understanding had to come first.

To sum up, we identified two major challenges to the success of the project:

Given the lack of a shared understanding at the project's outset, our priority was to establish common ground (to minimize future misunderstandings and ensure a high-quality product).
The need to raise the baseline of our technical capabilities by sharing and complementing each other's areas of expertise.

To solve these two issues, we decided to adopt mob programming as the development approach for this project.

What is Mob Programming?
Mob programming (which I will refer to as "MobPro" from here on) is described in the book Mob Programming Best Practices as "three or more people sitting in front of one computer working together to solve a problem." Everyone works on one PC (screen), but there are two distinct roles: since it involves three or more participants, decisions and instructions are reached through discussion among at least two team members, the "mob," while a designated "typist" turns their instructions into code.

Typist (operates the PC and writes code)
Mob (discusses and directs the development)

I think the concepts of resource efficiency and flow efficiency are important for understanding MobPro, where three or more people work on a single task.
You can find more details in Flow Efficiency and Resource Efficiency #xpjug - SlideShare (in Japanese), but MobPro is a way of working that focuses fully on flow efficiency. While our primary goal wasn't only to maximize flow efficiency, our work style naturally centers on it as long as we keep using MobPro.

Trial and Error / Measures Taken
Due to geographical constraints, we conducted mob programming remotely via Zoom. We knew we had to solve the issues above to make the project a success, so we incorporated MobPro with the following measures.

Increase the amount of information shared
To increase the amount of information, the typist shared their entire desktop rather than a single window. A shared window alone doesn't show what is happening outside it; sharing the whole desktop also conveys how the OS and tools are being used. Next, the typist made an effort to vocalize their thoughts while working as much as possible. This also helps catch discrepancies, especially since only the typist can actually produce the output. I noticed that the typist was often included in the discussions. Although this differs from the original MobPro typist role, building a common understanding is important, so we considered the typist's participation in discussions valid!

Time management
Since we knew it would be difficult for three or more people to get together spontaneously for MobPro, we scheduled dedicated times in our calendars. Scheduling MobPro sessions in advance helped establish a daily coding rhythm. We also added the Zoom Timer app to set time limits. MobPro is focused work, so it is easy to become fatigued without breaks; we used the timer to manage rotations and breaks.
According to books and case studies from other companies[^2], rotation times are often very short, around 10-minute shifts, but we settled on 30 minutes, partly because of the task breakdown described next. [^2]: https://techblog.yahoo.co.jp/entry/2020052730002064/

Subdivide feature tickets further and leave them as TODO comments
For the features targeted by MobPro, tasks were subdivided in advance and left as TODO comments in the source code to be modified. Leaving TODO comments had two benefits: it helps refine the goal of the feature, and it creates natural stopping points for wrapping up a task, allowing the typist role to be passed smoothly to the next person. For example, when creating a new API to update user information for payment processing, the following TODO comments were written before any code. At this point, the work content and the order of work were clear, letting us proceed smoothly through role rotations and breaks.

# TODO: add definition of user information update to openapi
# TODO: infrastructure - create an interface to update user information
# TODO: infrastructure - implement the user information update process
# TODO: application - implement the validation process
# TODO: application - implement the user information update process
# TODO: add lambda definition to the sam template
# TODO: deploy and check operation
def lambda_handler(event, context):
    pass

A relative criterion for deciding when to use MobPro
Were we going to develop everything with MobPro? This question arose early on, and we decided to prioritize for MobPro the features that seemed difficult or that required discussion.
While relative difficulty among the features served as one criterion for selecting MobPro targets during a sprint, we also found that our shared sense of what counted as difficult was well aligned, so proceeding this way made sense to everyone. We also set aside time to code the simpler features outside of our MobPro schedule. Since the difficult features were taken care of with MobPro, we saw a surge of pull requests for simple features once a MobPro session finished.

Applying MobPro to tasks other than programming
Despite its name, mob programming doesn't have to be used only for programming, so we decided to use it for other purposes as well. Here are two examples:

In code reviews
Code reviews were also sometimes done with MobPro. Since we didn't do everything with MobPro, there was inevitably code that not everyone had seen during development. At the beginning of each MobPro session, we set aside time to explain the code created outside of MobPro, with everyone listening and asking questions. Thanks to this dedicated time, we were able to move quickly through reviews, which often become a bottleneck.

Tasks related to operations
We also used MobPro time for operational tasks. Specifically, when setting up the Cognito user pool and configuring GitHub Actions for the first time, we worked on it collaboratively during MobPro time, with everyone watching. This also gave us a shared picture of the operational aspects of the project.

Ensuring that all team members communicate with the product owner
Although we now had a common understanding within the development team, excluding the product owner posed a risk of misalignment, and indeed, some misunderstandings did occur.
In the beginning, the product owner and I had many conversations, yet the decision-making process remained vague because no minutes were kept, leading to gaps in understanding. As a countermeasure, we made sure everyone participated in communication with the product owner and that minutes were kept in real time. Since then, gaps in understanding have decreased, and development has moved forward with less rework. At the same time, we created opportunities for team members to travel to each other's offices, increasing the chances for important decision-making and for offline MobPro.

Offline MobPro picture

Establishing a Sprint Zero (not directly related to MobPro)
We asked the product owner for a Sprint Zero, and we got a one-week Sprint Zero period in which we prepared to be able to produce a lot of output in Sprint 1. I particularly focused on two things:

Create a repository where create-react-app works, at a minimum
Prepare the deployment destination and GitHub Actions so the create-react-app app can be deployed

In other words, we first created a situation where anyone can check the running app in the development environment with one click. Continuous deployment allows for better and quicker feedback; by creating the mechanism early on, anyone can deploy the latest code to the development environment at any time. In fact, some people went beyond the scope of Sprint Zero and completed Sprint 1 features, which was amazing. Although not directly related to MobPro, I recommend establishing a Sprint Zero for any new development project. For more information on Sprint Zero and its various preparations, see [Document Published] Best Practices for Starting a Scrum Project | Ryuzee.com .

Analyzing the Results and Trying for the Next Ones
We were able to complete the development on schedule while meeting the quality required by the product owner.
We were also pleased to hear that they had never experienced a project completed on schedule with such high quality. We also had the person in charge of the business department, the actual user of the system, test the screens. We got their feedback, incorporated it accordingly, and in the end the user rated the system as "very easy to use." Now our development team can work independently even in areas where we used to be weak, and we are moving toward demonstrating our strengths further.

Analysis for future cases
I believe we achieved very good results as a project. Considering what made them reproducible, I can think of the following points:

By aligning common understandings at an early stage, we improved quality and reduced setbacks
By creating a foundation for development in Sprint Zero, work efficiency was high from the start
Through MobPro, we communicated frequently, and relationships were built early on

I think our early preparation, including communicating frequently from the early stages and establishing a robust development infrastructure, was what led us to success. We half expected MobPro to hurt our schedule, given how much it focuses on flow efficiency, but it did not. We believe the reason is that we separated the tasks: the difficult ones were handled with MobPro, while the easy ones were handled individually. In hindsight, I believe it was beneficial to focus MobPro on the bottlenecks in the development process.

Trying for the next
First, speaking for ourselves, when we encounter a problem we cannot solve on our own, we hope to use MobPro with someone who can help us solve it. I think this will positively impact both our technical knowledge and our teamwork. I would also like to try MobPro on a larger-scale project.
I think it will be difficult to concentrate fully on improving flow efficiency at that scale, but I'm looking forward to seeing the benefits of establishing a common understanding early on.

Summary
MobPro is a technique for maximizing flow efficiency, but it was also very useful for building the common understanding we initially lacked. By clarifying where to use MobPro so that we could concentrate on the difficult implementations, we were able to develop the system without delaying the project. KINTO Technologies has offices in Tokyo and Osaka. To learn more about the people working here or our open positions, please see the Job List . We look forward to hearing from you!
Introduction I'm K.Kane, an assistant manager in the Mobile App Development Group. I usually work as a PL on the my route project and as an iOS engineer. We recently shipped a release that lets the my route iOS app be launched as a routing app from the Maps app preinstalled on the iPhone (hereafter, the built-in Maps app). Quite a few apps, mainly map apps, are actually registered as routing apps, but there seem to be surprisingly few sites explaining how to set this up in Japanese, so I decided to write this article. Differences from a Share Extension On iOS, when integrating with other apps and passing data, using a Share Extension is the common approach. With a Share Extension, when another app tries to share data matching a format your app supports via the share menu, your app appears as a candidate in the list of share targets. In the built-in Maps app, spot search results are universal links beginning with " https://maps.app/com/? " and are shared through the regular share menu, so you can receive them by providing a matching Share Extension. Route sharing, on the other hand, passes departure and destination data, so the regular share menu is not shown; routes can only be shared with apps registered as routing apps. How to register as a routing app Registering as a routing app only requires adding a Capability; no Share Extension-style logic is needed (although, as described later, you do need to implement handling for the received data). In your project, select the app's TARGETS, open "Signing & Capabilities", and click "+Capability" in the upper left. (Xcode version in the screenshots: 15.4.) In the window that appears, type "Maps" and double-click the "iOS, watchOS" entry of the two shown to add it. A "Maps" section is added near the bottom of "Signing & Capabilities"; select the transport modes for which your app accepts routes. With these settings, your app will appear in the list of routing apps. After this configuration, the items shown in the image below are added to Info.plist. How to extract the received data Next, let's look at how to receive the data passed when the app is launched from the routing app list. Reception is handled in the SceneDelegate class. If your project has no SceneDelegate class, add one. I won't go into detail here, but you can either specify the added class under "Delegate Class Name" in Info.plist, or, for a SwiftUI app, set it from an AppDelegate class specified with @UIApplicationDelegateAdaptor inside your App-conforming struct.

import UIKit
import MapKit

class SceneDelegate: UIResponder, UIWindowSceneDelegate {
    func scene(_ scene: UIScene, willConnectTo session: UISceneSession, options connectionOptions: UIScene.ConnectionOptions) {
        if let context = connectionOptions.urlContexts.first, MKDirections.Request.isDirectionsRequest(context.url) {
            printMKDirectionsData(context.url)
        }
    }

    func scene(_ scene: UIScene, openURLContexts URLContexts: Set<UIOpenURLContext>) {
        if let url = URLContexts.first?.url, MKDirections.Request.isDirectionsRequest(url) {
            printMKDirectionsData(url)
        }
    }

    private func printMKDirectionsData(_ url: URL?)
    {
        guard let url else { return }
        let request = MKDirections.Request(contentsOf: url)
        if let source = request.source, let destination = request.destination {
            if !source.isCurrentLocation {
                print("dep_lat: " + String(source.placemark.coordinate.latitude))
                print("dep_lng: " + String(source.placemark.coordinate.longitude))
                print("dep_name: " + (source.name ?? ""))
            }
            if !destination.isCurrentLocation {
                print("des_lat: " + String(destination.placemark.coordinate.latitude))
                print("des_lng: " + String(destination.placemark.coordinate.longitude))
                print("des_name: " + (destination.name ?? ""))
            }
        }
    }
}

The built-in Maps app passes the data as an MKDirections.Request object. See here for details on the MKDirections.Request class; it has other properties such as departureDate, but these do not appear to be set in the data passed from the built-in Maps app. For convenience, the data extraction here is done inside the SceneDelegate class, but in practice you would pass the data to the screen that uses it. Additional steps for Apple review When submitting a routing app for Apple review, you must add a geojson file indicating the areas your app supports to the "Routing App Coverage File" field on the distribution tab in App Store Connect. For details on the geojson file, click the ? button next to the "Routing App Coverage File" item and check the linked page. When you register a geojson file in the correct format, the file name is displayed as shown below; once it is in this state, you can submit the app for review. Released without a hitch...? The my route iOS app was actually submitted for review at this point, approved without issues, and released in that state. The released app also integrated successfully, so I would love to end with "and that is how you receive data as a routing app", but there is a sequel... An unfamiliar warning started appearing... Just the other day, while working on Privacy Manifests support, I checked the email that is sent after uploading the app to App Store Connect and found warnings unrelated to Privacy Manifests! Below are the warnings from the actual email. As each warning indicates, the following settings had to be added to Info.plist: a Handler rank setting for the MKDirectionsRequest entry, and a Supports opening documents in place setting. The Info.plist after these additions looks like the image below. After adding them, the warnings no longer appeared. Conclusion The my route iOS app receives the departure point and destination from the built-in Maps app and uses them to search for that route again within my route. Other map apps seem to use this integration in similar ways, but even an app without map features can, by registering as a routing app, use departure and destination data through integration with the built-in Maps app, which may expand what your app can do. I hope this article is helpful to anyone thinking of adding routing app support.
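As a reference, the routing app coverage file is a GeoJSON file whose top-level object is, as I understand Apple's format, a single MultiPolygon. The sketch below covers one rectangular area; the coordinates are purely illustrative, not the area my route actually registers:

```json
{
  "type": "MultiPolygon",
  "coordinates": [
    [
      [
        [139.5, 35.5],
        [140.0, 35.5],
        [140.0, 35.9],
        [139.5, 35.9],
        [139.5, 35.5]
      ]
    ]
  ]
}
```

Each coordinate pair is [longitude, latitude], and the first and last points of a ring must match so the polygon is closed.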
Introduction Hello, I'm Kiyuno, in charge of frontend development for KINTO FACTORY . At KINTO FACTORY, we decided to use Strapi, a headless CMS, to launch a dedicated magazine page. (A separate article will cover the details, so stay tuned!) :::message What is Strapi? A headless CMS with high frontend flexibility. Low adoption cost, with content-retrieval APIs provided out of the box. OSS, so APIs can be added or extended as needed. ::: This article explains how to add custom APIs to Strapi, something we had to tackle when introducing it. It covers the following two patterns of custom API implementation. :::message Custom API implementation patterns and typical use cases. Implementing a new custom API: fetching and returning entries from multiple collectionTypes (content definitions), or returning the result of business logic that the default APIs cannot cover. Overriding a default API: changing entry-detail retrieval from the auto-assigned postId to a custom UID. ::: Making web page management more efficient is an eternal challenge, so I hope this article helps spare fellow engineers a few tears of blood. Environment Strapi version: Strapi 4. node version: v20.11.0. Implementing a new custom API This section shows how to implement a new custom API. Because you can implement it down to the SQL level, it is highly customizable, but overdoing it makes maintenance painful, so use it judiciously. 1. Create a router First, add routes matching the endpoint of the API you are creating. Under src/api there is a directory for each collectionType (post in the figure below), and under it a routes directory. Create a file for your custom route definitions under routes. (According to the official docs, there is an npx strapi generate command that scaffolds the necessary files; I didn't use it.) Write code like the following in the created file:

```ts
export default {
  routes: [
    {
      method: "GET", // HTTP method; change it to suit your use case.
      path: "/posts/customapi/:value", // Endpoint of the API being implemented.
      handler: "post.customapi", // Controller this route refers to.
    },
  ],
};
```

method specifies the HTTP method; change it to match the API you are creating. path specifies the endpoint of the custom API; the /:value at the end of the sample endpoint means the trailing value is received in a value variable. E.g., for requests to /posts/customapi/1 or /posts/customapi/2, value holds 1 or 2, respectively. handler specifies the controller (described below) that the custom API refers to; specify the function name of the controller you want to reference. 2.
Implement the controller Implement the controller referenced by the routes created in step 1. Open post.ts in the controllers directory, which sits at the same level as the routes directory. Add the handler specified in the routes ( customapi ) to the default controller (CoreController), as follows. Before (initial state):

```ts
import { factories } from '@strapi/strapi';

export default factories.createCoreController('api::post.post');
```

After:

```ts
import { factories } from "@strapi/strapi";

export default factories.createCoreController("api::post.post", ({ strapi }) => ({
  async customapi(ctx) {
    try {
      await this.validateQuery(ctx);
      const entity = await strapi.service("api::post.post").customapi(ctx);
      const sanitizedEntity = await this.sanitizeOutput(entity, ctx);
      return this.transformResponse(sanitizedEntity);
    } catch (err) {
      ctx.body = err;
    }
  },
}));
```

Changes: the custom handler customapi() is added to the default controller, and the strapi.service(...) call retrieves the result of the service customapi(), which holds the business logic. :::message This article moves the business logic into the service layer, but you can also implement business logic inside the controller (choose the layer based on reusability and readability). ::: I won't go into validateQuery(), sanitizeOutput(), and transformResponse() here; if you are curious, see the official Strapi documentation . 3.
Implement the service Implement the service referenced by the controller from step 2. Open post.ts in the services directory, at the same level as the controllers directory. Add the method specified in the controller ( customapi ) to the default service (CoreService), as follows. Before (initial state):

```ts
import { factories } from '@strapi/strapi';

export default factories.createCoreService('api::post.post');
```

After:

```ts
import { factories } from "@strapi/strapi";

export default factories.createCoreService("api::post.post", ({ strapi }) => ({
  async customapi(ctx) {
    try {
      const queryParameter: { storeCode: string[]; userName: string } = ctx.query;
      const { parameterValue } = ctx.params;
      const sql = "/** SQL suited to your database and purpose */";
      const [allEntries] = await strapi.db.connection.raw(sql);
      return allEntries;
    } catch (err) {
      return err;
    }
  },
}));
```

Changes: the custom service customapi() is added to the default service; the ctx.query line retrieves the query parameters, the ctx.params line retrieves the endpoint parameters, and strapi.db.connection.raw(sql) retrieves the SQL execution result. :::message Here we implement strapi.db.connection.raw(sql) to execute SQL directly, but Strapi provides other retrieval methods as well; the official documentation is a good reference. ::: 4. Verify the behavior That completes the implementation of a new custom API. Call the API and confirm it behaves as expected. Overriding a default API This section shows an example of overriding the default entry-detail API so that an entry can be fetched by an arbitrary parameter. [Entry-detail API] Before override: GET /{collectionType}/:postId(number). After override: GET /{collectionType}/:contentId(string). 1. Create a router This is basically the same as when implementing a new custom API. Add the following code to the custom.ts created under routes:

```ts
export default {
  routes: [
    {
      method: "GET",
      path: "/posts/:contentId",
      handler: "post.findOne",
    },
  ],
};
```

With this route added, the endpoint that fetched entry details via /posts/:postId(number) now fetches them via /posts/:contentId(string) (entry details can no longer be fetched via /posts/:postId(number) ). 2.
Implement the controller The controller implementation is also basically the same as for a new custom API. Change post.ts in the controllers directory (at the same level as the routes directory) as follows. Before (initial state):

```ts
import { factories } from '@strapi/strapi';

export default factories.createCoreController('api::post.post');
```

After:

```ts
import { factories } from "@strapi/strapi";
import getPopulateQueryValue from "../../utils/getPopulateQueryValue";

export default factories.createCoreController("api::post.post", ({ strapi }) => ({
  async findOne(ctx) {
    await this.validateQuery(ctx);
    const { contentId } = ctx.params;
    const { populate } = ctx.query;
    const entity = await strapi.query("api::post.post").findOne({
      where: { contentID: contentId },
      ...(populate && {
        populate: getPopulateQueryValue(populate),
      }),
    });
    const sanitizedEntity = await this.sanitizeOutput(entity, ctx);
    return this.transformResponse(sanitizedEntity);
  },
}));
```

Changes: a custom findOne() is added to the default controller; the where clause extracts the record whose contentID column matches contentId, and because .findOne() is used, the result is a single object. :::message The spread over populate follows the populate parameter handling provided by the default API. If you want to fetch videos or images from the mediaLibrary, note that the populate parameter is required. ::: In this case, the business logic is implemented in the controller rather than the service. 3. Verify the behavior That completes the override of the default API. Call the API and confirm it behaves as expected. Summary That concludes this walkthrough of implementing custom APIs in Strapi. Strapi is highly customizable and, I think, a great tool, so I hope to keep sharing knowledge, and I'd be delighted if you shared yours as well. I still have other topics in stock, such as automatically building the application when a Strapi article is published, and embedding videos (.mp4) in CKEditor, but those will have to wait for another time. Thank you for reading.
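The controller above imports a getPopulateQueryValue helper that is not shown in the article. As a rough idea of what such a helper might do (this is my own hypothetical sketch, not the actual KINTO FACTORY implementation), it could normalize Strapi's populate query parameter, which may arrive as "*", a comma-separated string, or an array, into a value the query engine accepts:

```typescript
// Hypothetical sketch of a populate normalizer; not the helper used in the article.
type PopulateQuery = string | string[];

// "*" asks Strapi to populate every relation, which maps to `true`;
// otherwise return a clean array of relation names.
function getPopulateQueryValue(populate: PopulateQuery): true | string[] {
  if (populate === "*") return true;
  const parts = Array.isArray(populate) ? populate : populate.split(",");
  return parts.map((part) => part.trim()).filter((part) => part.length > 0);
}
```

With this shape, populate=cover,author would populate just those two relations, while populate=* would populate everything.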
Introduction Hello, I am yuki.n. I joined the company in January this year! I interviewed those who joined in December 2023 and in January this year about their first impressions of the company! I hope this content will be useful for those who are interested in KINTO Technologies, and serve as a reflection for the members who participated in the interviews. Hoshino Self-introduction My name is Hoshino, and I joined the company in January as Deputy General Manager of the Mobility Product Development Division, a newly established division. I have been working to create and operate services from a technical perspective. How is your team structured? There are 4 teams: (1) in charge of in-house media, (2) in charge of incubation projects, (3) in charge of tool development for dealers, and (4) in charge of tool planning for dealers. As of February 2024, we have 23 members, mostly software developers, but we also have producers, directors, and designers. We are a team with the capability to run a business holistically. What was your first impression of KINTO Technologies when you joined? Were there any surprises? Yes, indeed! I think it is wonderful that the company provides not only explanations of the divisions, but also a full orientation that includes explanations of the business flow, vision, and medium- to long-term plans, so that mid-career employees can move in the same direction. What is the atmosphere like on site? Despite the wide age range of the members, who are in their 20s to 40s, everyone seems to be in harmony with each other. I initially assumed that many of the members had been with the company for a long time, but a lot of employees had been here for only six months or less. I felt the company's openness to welcoming new people. Work styles are diverse, and remote work seems to be more frequent than in other divisions. I think this team is ideal for those seeking challenges, thanks to the diverse backgrounds of its members.
If you are interested, please contact our HR! How did you feel about writing a blog post? I think it is a very good initiative, as organizations capable of sharing information will gain a competitive edge in recruitment. [Question from Romie] I feel that hitting roadblocks in the early stages of launching and running a service can pose significant challenges for recovery down the line. What do you think are the crucial aspects and important mindsets one shouldn't overlook when starting out? It is important to understand that services truly begin when customers start using them, that their value begins from that moment onward, and that they require continuous nurturing. Taking the above into consideration, put simply: aim to establish operations that are sustainable over time. However, since new services may not be fully adopted from launch, I think it is important to discern which core requirements must be maintained first, and to start small. Once a service starts, I think the most important thing is to avoid any interruption visible to users, more so than any individual troubleshooting. As for sustainability and continuity, establishing a strong relationship with the product owner is beneficial. Choi Self-introduction I'm Choi from the New Vehicle Subscription Development Group within the KINTO ONE Development Division. I joined the company in December. I have been working in frontend and backend development for various web services. How is your team structured? As a content development team, we have nine members including myself. Most of them are frontend engineers. What was your first impression of KINTO Technologies when you joined? Were there any surprises? I felt the systems were well organized, thanks to the comprehensive orientation provided upon joining the company. I was impressed by the company's blend of characteristics from both a major company and an IT startup.
What struck me the most was how experienced the engineers were and how they kept exploring and studying new technologies. What is the atmosphere like on site? There were many things I didn't understand during my first month after joining the company, but everyone on the team was kind and helpful in answering any work-related questions. The Osaka office where I work is still a small group of about 30 people, and we can communicate well with people from other divisions. Once a month, we hold lightning talks with office members at our “info-sharing meeting,” and we also share ideas to improve our office environment. How did you feel about writing a blog post? I was a little worried because I am not good at writing Japanese, but I think it went well, as I was able to reflect on my past two months. [Question from Hoshino] Please tell us if there is any app that you thought "This is excellent!" as a frontend engineer. The pace of technological advancement in frontend development seems fast these days. Many sites are also user-friendly in terms of UI/UX. While I don't have a particular app that I think is the best, I have experience in backend and app development as well as frontend development, and from this perspective, I've recently been interested in Flutter and React Native, which let me build without platform restrictions. It has been a few years since they were released, but when I first started developing apps, I had to create Android, iOS, and web apps separately, so eliminating that workload has been a huge help to me as an engineer! YI Self-introduction I am YI from the Operation System Development Group in the Project Development Division. In my previous job at a systems integrator (SIer), I was mainly engaged in B2B system implementation projects across industries, working on both frontend and backend. Currently, I am developing a system to handle back office operations related to KINTO ONE used vehicles.
How is your team structured? The used car system team consists of 5 people, plus about 10 members from partner companies. What was your first impression of KINTO Technologies when you joined? Were there any surprises? I was surprised that the purchase of expensive software licenses proceeded with only Kageyama-san's (our VP's) approval via Slack, and it was ready for use the next day. What is the atmosphere like on site? I have the impression that there are many people in my age group with diverse backgrounds. How did you feel about writing a blog post? Actually, I had been reading the Tech Blog before joining the company, so I knew about this project, but when it came time to write one myself, I thought, "Is it really my turn now?!" [Question from Choi] What activities would you like to do outside of work within the company (hobbies, sports, etc.)? I played tennis in high school, so I'd like to play with the members of the "ktc-tennis club," and also join the activities of the "Golf club" and "Car club" channels in our Slack. I find it really valuable to be able to build connections “horizontally” with colleagues who aren't directly involved in my daily work, so I am looking forward to participating in different activities! HaKo Self-introduction I am HaKo from the Analysis Produce Team, Data Analysis Group. I've worked as a researcher and analyst for research companies and retail companies. I find it interesting to learn how people use services and what goes through their minds when they do. How is your team structured? We are a team of nine, including my manager and me. Our team was formed by consolidating several smaller, subdivided teams into one. What was your first impression of KINTO Technologies when you joined? Were there any surprises? I have often worked in environments with older age groups, so I was moved by the lack of rigid "protocols". What is the atmosphere like on site?
I feel that everyone has their own specialties and areas of expertise, which is very inspiring to see. How did you feel about writing a blog post? It's my first time writing a blog post, but it reminded me of the days, a long time ago, when I used to keep a diary on mixi. [Question from YI] What has changed since joining KINTO Technologies? There were many projects that I took over soon after joining the company. They are centered more around technical tasks, such as creating the email newsletter distribution system, rather than the sales promotion planning and analysis that had been my main focus until then. yuki.n Self-introduction I'm yuki.n from the New Vehicle Subscription Development Group in the KINTO ONE Development Division. I joined the company in January this year as a frontend engineer. I was assigned to Osaka. I would be happy to be involved in a diverse range of tasks, not limited to frontend. How is your team structured? As a newly established team, we are currently four people including myself, comprising both internal and external members. What was your first impression of KINTO Technologies when you joined? Were there any surprises? I was surprised at how solid the company is in many areas, such as the orientation, company rules, and so on. It was a very new experience for me, partly because I had rarely encountered this in my past jobs. What is the atmosphere like on site? It gives me a sense of comfort and tranquility, in a positive way. All the other team members are in Tokyo, but I feel no particular communication barriers. I feel comfortable interacting with them. I am also grateful that I am allowed to do things quite freely, such as being given the chance to try out my own initiatives. How did you feel about writing a blog post? This is my first time writing a blog post for work, so I was nervous, but I thought it was a great initiative. [Question from HaKo] Please tell us what surprised or impressed you when you joined KINTO Technologies.
It overlaps with what I mentioned before, but even though I just joined the company, I am pleased that the team has accepted my ideas and "what I want to do." I was surprised and impressed at the same time. Kiyuno Self-introduction I am Kiyuno from the Project Promotion Division, Project Development Division. I was assigned to the frontend development of KINTO FACTORY. I work at the Muromachi Office. How is your team structured? We are a team of six, including myself, all working on frontend development. I want to keep the title of the youngest engineer on the team. I might even be one of the youngest in the company. What was your first impression of KINTO Technologies when you joined? Were there any surprises? I had the impression that it was laid-back in a good way. There were no surprises in particular; I am happy with the looseness I expected. It is wonderful that they are so accepting of me wanting to try something! What is the atmosphere like on site? I’d say our team is like a cozy little island. While communication within the team is active and individual opinions are respected, the team is introverted and has room for improvement in exerting more external influence. We found this out through the StrengthsFinder assessment. I was also warmly welcomed after joining the company, making it easy for me to quickly get used to the atmosphere. How did you feel about writing a blog post? I had been tasked with posting tech blogs in my previous job, so I wasn't too concerned about it. Since I'm a naturally shy person, I feel anxious about self-disclosure, but I would be happy if this article sparks your interest in our organization. [Question from yuki.n] Please tell us about what you are currently interested in or pursuing in terms of technology! I am delving into the field of 'prompting skills' to optimize output in tools like ChatGPT. This also comes in handy when using "Sherpa," the ChatGPT-based AI language model that we use internally at KINTO Technologies.
K Self-introduction I am K from the Project Promotion Division, Project Development Division. I am in charge of Salesforce development and work at the Muromachi Office. In my previous job at a systems integrator (SIer), I was involved in multi-cloud system implementations across industries. How is your team structured? The Salesforce team consists of 4 people, plus about 10 business partners. What was your first impression of KINTO Technologies when you joined? Were there any surprises? My first impression was that there were many technical study sessions. What is the atmosphere like on site? There are a lot of experienced engineers, and I noticed that they are actively learning new technologies. How did you feel about writing a blog post? I believe that writing for the KINTO Technologies Tech Blog will be a valuable experience. [Question from Kiyuno] What is the most important mindset in development? I think it is important to be flexible in order to adapt to new situations and deal with evolving technology and changing project requirements. It requires the ability to calmly deal with issues as they arise and find effective solutions. I believe it's important to pursue both creative solutions and routine problem-solving. Mukai (mt_takao) Self-introduction My name is Mukai (mt_takao). I joined the company in December. In my previous job, I was primarily a (digital) product designer and product manager for a BtoB taxi dispatch application. At KINTO Technologies, as in my previous job as a product designer, I am in charge of the overall design development of products for Toyota dealers. How is your team structured? I am part of the DX Planning Team, Owned Media & Incubation Development Group, Mobility Product Development Division. Our mission every day is to use the power of digital technology to solve the challenges and difficulties faced by Toyota dealers. What was your first impression of KINTO Technologies when you joined?
Were there any surprises? My impression is that the onboarding process at the time of joining the company, including orientation, was much more organized than I expected. I had several opportunities to learn about organizational challenges before joining the company. I made my decision to join after fully understanding them, so there were no major surprises. What is the atmosphere like on site? The DX Planning team where I belong is relatively young and many members have recently joined the company. Despite this, we all share the same attitude of moving forward by drawing upon our individual experiences. How did you feel about writing a blog post? I see strengthening our communication ability as a challenge, both on an individual and organizational level, and I am grateful for the opportunity to do so. [Question from K] Was there a particular design that you considered the best in terms of UI/UX? It is quite difficult to call it the best design, but I've been paying attention lately to the Apple Vision Pro . It appears that technologies expanding into the real world with AR and VR have already started to emerge, and I'm thrilled that this tech has finally become a reality. Reference Review of actual Apple Vision Pro: The world of "using the whole space for work" has come (in Japanese) It seems that it is only available in the U.S. now. I would like to experience it when it becomes available in Japan. As a side note, Productivity Future Vision , which describes the future of Microsoft Corporation, is also similar to the world that Apple Vision Pro envisions. If you're interested, please feel free to take a look. Romie Self-introduction I am Romie. I joined the company in December 2023. I belong to the Mobile App Development Group, Platform Development Division. I began working with embedded systems, moved on to the web, and am currently developing mobile applications. In the field of creating mobile apps, I still have a lot to learn. How is your team structured? 
It is separated into iOS and Android, and I am on the Android team. We are a team of five, including me! Three of us are foreign nationals; we are an international team. What was your first impression of KINTO Technologies when you joined? Were there any surprises? I was amazed by everyone’s speed in proactively catching up with the latest technologies. I was impressed by the robust support provided by the company, and pleasantly surprised to find its corporate culture more liberal than I had expected. What is the atmosphere like on site? I feel that we can talk to each other without hesitation and work at ease. Despite our diverse backgrounds, I feel that we form a well-balanced team with a collaborative dynamic and no hierarchies. How did you feel about writing a blog post? Output leads to daily reflection, and the more you share information, the more attention you get, so I'd like to continue doing it! [Question from Mukai] What do you want to achieve at KINTO Technologies or in the mobility field? I am in charge of mobile app development, so I want to contribute to KINTO Technologies and the mobility field through the app I am entrusted with. To achieve that, I aim to continuously catch up with the latest technology and work on the growth and development of the products in front of me. Conclusion Thank you all for sharing your impressions after joining the company! The number of new members at KINTO Technologies is increasing day by day! I hope you look forward to more posts introducing the new members joining our various divisions. Moreover, KINTO Technologies is actively seeking professionals who can collaborate across different divisions and fields! For more information, click here !
Introduction Hello. I am Shimamura, a DevOps engineer in the Platform Group. At KINTO Technologies, the Platform G (Group) DevOps support team (and SRE team) works on improving monitoring tools and keeping them up to date alongside our CI/CD efforts. Our Platform G also includes other teams such as the System Administrator team, CCoE team, and DBRE team. In addition to designing, building, and operating infrastructure centered on AWS, Platform G is also responsible for system improvement, standardization, and optimization across the entire company. As part of this, we introduced an APM mechanism using Amazon Managed Service for Prometheus (hereafter Prometheus), X-Ray, and Amazon Managed Grafana (hereafter Grafana), which became GA last year, and that is why I decided to write this article. Background When I joined KINTO Technologies (at that time, part of KINTO Corporation) in May 2021, we were monitoring AWS resources and specific log messages. However, this was done using CloudWatch, and the Platform G team was responsible for the design and setup. At that time, metrics for application operations were not being collected. Log monitoring was also less flexible to configure, and error detection relied primarily on AWS metrics/logs, or on passive detection and response through notifications from external monitors. In terms of the maturity levels commonly referred to in O11y, we were not even at Level 0: “to implement analytics”. However, we were aware of this problem within our team, so we decided to start by implementing APM + X-Ray as a starting point for measurement. Here is a reference to the O11y maturity model Element APM (Application Performance Management) Manages and monitors the performance of applications and systems; also known as application performance management. By examining the response time of applications and systems, as well as component performance, we can understand the overall operational status of applications.
This helps us to quickly identify the bottlenecks causing system failures, and we can use this information to make improvements. X-Ray A distributed tracing mechanism provided by AWS capable of: providing system-wide visibility of call connections between services; visualizing the call connections between services for a specific request (visualizing the processing path of a specific request); quickly identifying system-wide bottlenecks. Task (Action) Considerations I first thought about tackling the above-mentioned Level 0 requisite, "to implement analytics." During the implementation phase, the idea of using Prometheus + Grafana came up. Since it was being previewed as a managed service on AWS at that time, we decided to go with this option. While there are other commonly used SaaS offerings, such as Datadog, Splunk, New Relic, and Dynatrace, we decided to use AWS without evaluating them in depth. Later on, I began to understand why those SaaS offerings were not being used; I will delve deeper into the reasons later. Implementation Prometheus As for the metrics output to Prometheus, I summarized it in an article titled Collecting application metrics from ECS for Amazon Managed Service for Prometheus , written as a 2021 advent calendar article at KINTO Technologies. X-Ray Taking over the documents left by the team members who did the initial evaluation, we organized and documented how to incorporate the AWS X-Ray SDK for Java into ECS task definitions and so on, based on the AWS X-Ray SDK for Java . Initial Configuration Improvements OpenTelemetry in place of the X-Ray SDK The team that started using Java 17 reached out with concerns about the ServiceMap not displaying correctly in X-Ray. If you look closely, the AWS X-Ray SDK for Java declares support for Java 8/11, but not for Java 17. I decided to move to AWS Distro for OpenTelemetry Java , as it currently seems to be the recommended option. One of its benefits is that it can operate together with the APM Collector.
Java Simply download the latest release jar file from aws-observability/aws-otel-java-instrumentation and save it under src/main/jib to deploy. The SDK for Java also included a definition file for sampling settings, which gave the impression that setup has been simplified. Environment Variables for ECS Task Definitions Add the agent definition to JAVA_TOOL_OPTIONS. We have also added environment variables for OTEL. Check the JSON in the ECS task definition:

```json
{
  "name": "JAVA_TOOL_OPTIONS",
  "value": "-Xms1024m -Xmx1024m -XX:MaxMetaspaceSize=128m -XX:MetaspaceSize=128m -Xss512k -javaagent:/aws-opentelemetry-agent.jar ~~~~~~~ "
},
{
  "name": "OTEL_IMR_EXPORT_INTERVAL",
  "value": "10000"
},
{
  "name": "OTEL_EXPORTER_OTLP_ENDPOINT",
  "value": "http://localhost:4317"
},
{
  "name": "OTEL_SERVICE_NAME",
  "value": "sample-traces"
}
```

The above is how it looks (in reality it may look a little different because we use Parameter Store, etc.). OpenTelemetryCollector's Config Using Configuration as a reference, modify the Collector's config as follows. It contains both APM and X-Ray, with metrics labeled per task. Please note that the "awsprometheusremotewrite" exporter has been deprecated since v0.18 of aws-otel-collector, and the function has been removed from v0.21, so "prometheusremotewrite" with "sigv4auth" should be used instead.
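For reference, the replacement exporter setup described above would look roughly like the following sketch (the endpoint is a placeholder for your own Amazon Managed Service for Prometheus workspace URL, and the receiver/processor names assume the metrics pipeline used in our config):

```yaml
extensions:
  sigv4auth:
    region: "us-west-2"
    service: "aps"

exporters:
  prometheusremotewrite:
    # Placeholder: use your own AMP workspace's remote_write URL
    endpoint: https://aps-workspaces.us-west-2.amazonaws.com/workspaces/<workspace-id>/api/v1/remote_write
    auth:
      authenticator: sigv4auth

service:
  extensions: [sigv4auth]
  pipelines:
    metrics:
      receivers: [prometheus]
      processors: [resourcedetection]
      exporters: [prometheusremotewrite]
```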
```yaml
receivers:
  otlp:
    protocols:
      grpc:
        endpoint: 0.0.0.0:4317
      http:
        endpoint: 0.0.0.0:4318
  awsxray:
    endpoint: 0.0.0.0:2000
    transport: udp
  prometheus:
    config:
      global:
        scrape_interval: 30s
        scrape_timeout: 20s
      scrape_configs:
        - job_name: "KTC-app-sample"
          metrics_path: "/actuator/prometheus"
          static_configs:
            - targets: [ 0.0.0.0:8081 ]
  awsecscontainermetrics:
    collection_interval: 30s

processors:
  batch/traces:
    timeout: 1s
    send_batch_size:
  resourcedetection:
    detectors:
      - env
      - ecs
    attributes:
      - cloud.region
      - aws.ecs.task.arn
      - aws.ecs.task.family
      - aws.ecs.task.revision
      - aws.ecs.launchtype
  filter:
    metrics:
      include:
        match_type: strict
        metric_names:
          - ecs.task.memory.utilized
          - ecs.task.memory.reserved
          - ecs.task.cpu.utilized
          - ecs.task.cpu.reserved
          - ecs.task.network.rate.rx
          - ecs.task.network.rate.tx
          - ecs.task.storage.read_bytes
          - ecs.task.storage.write_bytes

exporters:
  awsxray:
  awsprometheusremotewrite:
    endpoint: [apm endpoint]
    aws_auth:
      region: "us-west-2"
      service: "aps"
    resource_to_telemetry_conversion:
      enabled:
  logging:
    loglevel: warn

extensions:
  health_check:

service:
  telemetry:
    logs:
      level: info
  extensions:
    - health_check
  pipelines:
    traces:
      receivers: [otlp, awsxray]
      processors: [batch/traces]
      exporters: [awsxray]
    metrics:
      receivers: [prometheus]
      processors: [resourcedetection]
      exporters: [logging, awsprometheusremotewrite]
    metrics/ecs:
      receivers: [awsecscontainermetrics]
      processors: [filter]
      exporters: [logging, awsprometheusremotewrite]
```

Current Configuration

Getting It Adopted

This step was especially difficult. As mentioned in the introduction, I have been evaluating tools and providing them to other teams. It might seem unconventional, but I wanted to optimize tooling with a holistic view, from the perspective of a DevOps practitioner. That is why I belong to Platform G, which works across the entire organization and facilitates cross-functional activities.
As a result, I often find myself in this situation: Platform G = the party that sees the issues; the people in charge of applications = the party unaware of them. But recently, through our consistent dedication, I think people have come to understand the importance of our efforts.

A case study where we didn't use SaaS

The following are my personal reflections. There is a general perception that SaaS solutions, especially those related to O11y, tend to accumulate large amounts of data, leading to high overall costs. Paying a significant amount for tools that go "unused" until their utility is understood remains hard to justify in terms of cost effectiveness. As you progress toward actively addressing O11y maturity level 2, there will be demand for a bird's-eye view of bottlenecks and performance — connecting logs and metrics to events, and so on — and that is where their value may emerge. Even if the tools end up divided, I think that is acceptable given each person's task load and the amount of passive response they can absorb. It also can't be helped that Grafana dashboards tend to be created only ad hoc. If the cost of SaaS becomes lower than that of maintaining the dashboards, then migration will happen. Or so I think.

Impressions

Grafana, Prometheus, and X-Ray as managed services are not as easy to deploy as SaaS, but they are relatively inexpensive. In the early stages of DevOps and SRE efforts, this aspect may be worth considering when rolling out O11y. I had heard concerns about adopting SaaS, but I now appreciate the value of O11y — of reviewing improvements and activities, and of comparing costs before starting to use the various SaaS options. Overall, I feel positive about it. Tools like Datadog or New Relic dashboards, or HostMap, offer visually appealing designs, giving you a sense of active monitoring as you watch the data move (`・ω・´) I mean, why not!! They look so cool!
Introduction

In Part 1, we covered Figma's variable feature, increasing and decreasing item quantities, and setting up the subtotal. In this follow-up, we will increase the number of products in the cart to two, set up the subtotal on top of that, configure a free-shipping condition, calculate the total, and change the message displayed once shipping becomes free.

Let's Build a Shopping Cart Mockup with Figma's Variable Feature! Part 2

![](/assets/blog/authors/aoshima/figma2/1.webp =300x) Shopping cart, finished view

[Part 1] What are variables / Creating parts / First, the count-up feature / Creating and assigning variables / Building the count-up feature / Setting the subtotal

[Part 2] Increasing the products to two / Setting the subtotal / Free-shipping setup / Setting the total / Changing the free-shipping message / Finished

Increasing the products to two

To get two products into the cart, first copy the product information covered in Part 1, then update the product photo, name, price, and the number indicating the quantity in the cart. (For duplication, using the component variant feature to add the product is of course also fine.) ![](/assets/blog/authors/aoshima/figma2/2.webp =300x) Duplicate the original product, changing the product name, photo, and price as you do. In the explanation below, the original product (SPECIAL ORIGINAL BLEND) is called "Product A" and the newly copied product (BLUE MOUNTAIN BLEND) is called "Product B". At this point, as with Product A, assign the variable "Kosu2" to the number representing Product B's quantity, and set up the count-up behavior on Product B's plus and minus buttons, referring to Part 1.

Setting the subtotal

This is an application of the subtotal setup from Part 1.

Creating and assigning the variable

Part 2 assumes one each of the two products (A and B) in the cart, so update the value of the local variable "Shoukei" to the combined total of 250 (Product A: ¥100 × 1 + Product B: ¥150 × 1). Once changed, the number on the canvas bound to this variable updates automatically to show the new amount. ![](/assets/blog/authors/aoshima/figma2/3.webp =300x) The local variable list. The red frame marks the variable assigned to the subtotal. ![](/assets/blog/authors/aoshima/figma2/4.webp =300x) The subtotal reflecting the value of the local variable "Shoukei".

Entering button actions

In Part 1, the subtotal was calculated from Product A alone, configured as in the figure below: when Product A's plus or minus button is clicked, select the variable to change ("Shoukei"), and enter the expression describing what happens — the quantity variable "Kosu1" × 100 (Product A's unit price). ![](/assets/blog/authors/aoshima/figma2/5.webp =300x) The subtotal expression set in Part 1. This time, following the same approach, set expressions on the plus and minus buttons of Products A and B so that the subtotal covers both products, as shown below. ![](/assets/blog/authors/aoshima/figma2/6.webp =300x) The settings on Product A's plus button. The dotted frame marks the subtotal settings; in the solid red frame, the left side represents Product A and the right side Product B. With this in place, the subtotal is recalculated and updated every time a plus or minus button is pressed. Trying the buttons in the preview confirms that the combined total of the two products is correctly reflected in the subtotal. ![](/assets/blog/authors/aoshima/figma2/7.gif =300x)

Free-shipping setup

Next, let's configure "Free shipping on orders of ¥1,000 or more!" The shipping conditions are as follows: 1. If the subtotal is under ¥1,000, ¥500 shipping is added. 2. If the subtotal is ¥1,000 or more, shipping is free.

Creating and assigning the variable

First, assign a variable to the number representing shipping.
In this mockup, the initial cart contains one of each product, with a subtotal of ¥250 and shipping of ¥500, so create a new variable named "Shipping", set its value to 500, and assign it to the number next to the shipping line. ![](/assets/blog/authors/aoshima/figma2/8.webp =300x) The variable "Shipping" assigned to the number next to shipping

Entering button actions

Next, configure the button actions that calculate the subtotal. Since the shipping amount branches depending on whether the subtotal is under ¥1,000 or not, use an if statement. If the subtotal is under ¥1,000, shipping is ¥500, which can be expressed as follows. ![](/assets/blog/authors/aoshima/figma2/9.webp =300x) This expression means: if the subtotal is under ¥1,000, set "Shipping" to 500. Since "Shipping" already starts at 500, you may wonder whether entering the same value is really necessary. However, with this setting in place, if the subtotal rises to ¥1,000 or more (setting shipping to ¥0) and then drops back under ¥1,000, shipping can be restored from ¥0 to ¥500. Next, if the subtotal is ¥1,000 or more, shipping is ¥0, expressed as in the red frame below. ![](/assets/blog/authors/aoshima/figma2/10.webp =300x) This means: if the subtotal is ¥1,000 or more, set "Shipping" to 0. Incidentally, "else" covers every case not matched by the "if" condition. Since the "if" here covers subtotals under ¥1,000, "else" covers everything else, i.e. ¥1,000 and above. After applying these settings to each button and previewing, you can confirm that shipping is displayed as "¥0" once the subtotal passes ¥1,000. With this configuration, shipping is adjusted automatically according to the subtotal. ![](/assets/blog/authors/aoshima/figma2/11.webp =300x) Shipping becomes ¥0 once the subtotal passes ¥1,000

Setting the total

Next, let's move on to setting the total amount.

Creating and assigning the variable

The variable representing the total is named "T_Am", short for "Total Amount". At the risk of repeating myself, this mockup assumes one each of Products A and B in the cart, a ¥250 subtotal, and ¥500 shipping, so set "T_Am" to the initial total of 750. Assigning the variable "T_Am" to the number showing the total displays the value "750". ![](/assets/blog/authors/aoshima/figma2/12.webp =300x) The total amount with the variable assigned

Entering button actions

The total amount also needs a conditional branch on whether the subtotal is under ¥1,000 or not. Since the condition matches the shipping setup, we add to those action settings. Hovering next to the if statement reveals a "+" button with the label "Add nested action"; pressing it opens a space for additional settings. You can add any number of actions to a single condition this way. ![](/assets/blog/authors/aoshima/figma2/13.webp =300x) If the subtotal is under ¥1,000, total = subtotal + shipping, written as in the red frame below. ![](/assets/blog/authors/aoshima/figma2/14.webp =300x) If the subtotal is ¥1,000 or more, total = subtotal (+ ¥0 shipping), written as in the red frame below. Please note that this entry goes inside the "else" branch. ![](/assets/blog/authors/aoshima/figma2/15.webp =300x) After applying the settings to each button and previewing, you can see shipping drop to ¥0 when the subtotal reaches ¥1,000, with the total reflecting the change. ![](/assets/blog/authors/aoshima/figma2/16.webp =300x)

Changing the free-shipping message

Finally, let's make changes to the free-shipping message. Here we want to hide the free-shipping message below the header (red frame) once shipping becomes free. ![](/assets/blog/authors/aoshima/figma2/17.webp =300x)

Creating and assigning the variable

For toggling things like visibility, boolean variables are commonly used. A boolean is a data type representing an either/or condition such as true/false or yes/no. For a show/hide toggle like this one, Figma automatically maps "true" to visible and "false" to hidden, and we use that mapping as-is. First, open the local variables panel and press the create-variable button, selecting "Boolean" as the data type. Since the text relates to shipping, I named it "Ship_Txt". In the cart's initial state the subtotal is under ¥1,000 and the message must be visible, so the initial value is "true". ![](/assets/blog/authors/aoshima/figma2/18.webp =300x) A boolean created in local variables with an initial value of true

Next, here is how to assign the created variable. First, select the target object on the canvas. Then, in the "Layers" section of the right-hand panel, right-click the "eye" icon next to Pass through. This icon is not always shown, so it can be hard to find. Right-clicking displays a dropdown list of assignable variables; select the variable created earlier. ![](/assets/blog/authors/aoshima/figma2/19.webp =300x)

Entering button actions

The message's visibility also branches on the subtotal amount, so we add the action settings. If the subtotal is under ¥1,000, the message is shown ("Ship_Txt" = true), so add the entry shown below. ![](/assets/blog/authors/aoshima/figma2/20.webp =300x) The entry changing the boolean variable "Ship_Txt" to "true". If the subtotal is ¥1,000 or more, the message is hidden ("Ship_Txt" = false), so add the entry below. Please note that this goes inside the "else" branch. ![](/assets/blog/authors/aoshima/figma2/21.webp =300x) The entry changing the boolean variable "Ship_Txt" to "false". After applying the settings to each button and running the preview, you can see the message disappear when the subtotal reaches ¥1,000. ![](/assets/blog/authors/aoshima/figma2/22.webp =300x) The message is now successfully hidden. However, this leaves an awkward empty space, which isn't great from a layout standpoint, so let's also try changing the message itself.

Changing the free-shipping message, ver. 2

Creating and assigning the variable

Assuming shipping is either charged or free, we set up the following two messages to swap: under ¥1,000, "Spend ¥◯◯ more for free shipping!"; ¥1,000 or more, "Free shipping!" I turned the free-shipping message area into a component named "Ship_Txt_Panel" and created two variants. Since we want to switch between them, enter boolean values in their respective properties. ![](/assets/blog/authors/aoshima/figma2/23.webp =300x) First, select the upper variant and open the property section in the right-hand panel. Since this variant is shown in the initial state, set it to "true". ![](/assets/blog/authors/aoshima/figma2/24.webp =300x) Then set the lower variant's property to false. ![](/assets/blog/authors/aoshima/figma2/25.webp =300x)
Once the properties are set, place an instance of the component in the design. With the instance selected, checking the right-hand panel shows a boolean toggle switch in the instance section. ![](/assets/blog/authors/aoshima/figma2/26.webp =300x) The instance placed in the design, selected ![](/assets/blog/authors/aoshima/figma2/27.webp =300x) The right-hand panel's toggle in the true state. Flipping this toggle swaps the instance's content, confirming that the boolean values are set correctly. ![](/assets/blog/authors/aoshima/figma2/28.webp =300x) Flipping the right-hand panel's toggle to false ![](/assets/blog/authors/aoshima/figma2/29.webp =300x) The instance's content switches. Furthermore, hovering over the toggle reveals an icon for assigning a variable along with floating text; clicking it shows the candidate variables, from which we select the boolean variable "Ship_Txt" and assign it to the instance. ![](/assets/blog/authors/aoshima/figma2/30.webp =300x) Clicking the icon in the red frame shows the candidate variables. ![](/assets/blog/authors/aoshima/figma2/31.webp =300x) The instance with the variable assigned.

Entering button actions

The button actions here are identical to those set earlier for toggling the message's visibility, so no changes are needed. Previewing right away shows the message changing once the subtotal passes ¥1,000. ![](/assets/blog/authors/aoshima/figma2/32.webp =300x) Finally, let's make the amount inside the message change along with the subtotal.

Creating and assigning the variable

Edit the message inside the component, separating the variable amount from the rest of the text. ![](/assets/blog/authors/aoshima/figma2/33.webp =300x) The variable amount highlighted. Next, create the variable to assign to it: choose the "Number" data type and name it "Extra_Fee". Its value represents the remaining difference to the ¥1,000 free-shipping threshold. Since the cart's subtotal is ¥250, ¥1,000 - ¥250 = ¥750, so set the value of "Extra_Fee" to "750". ![](/assets/blog/authors/aoshima/figma2/34.webp =300x) Assigning the variable to the amount looks as follows. ![](/assets/blog/authors/aoshima/figma2/35.webp =300x)

Entering button actions

So that this amount changes with the subtotal, configure it as follows. Note that when the subtotal is ¥1,000 or more, the message itself switches, so no setting is needed for that case. ![](/assets/blog/authors/aoshima/figma2/36.webp =300x)

Finished

Apply the settings to each button and run the preview: the amount in the message changes as the plus (or minus) buttons are pressed, and the message switches once the subtotal crosses the ¥1,000 boundary. ![](/assets/blog/authors/aoshima/figma2/37.webp =300x) That concludes "Let's Build a Shopping Cart Mockup with Figma's Variable Feature!" The features introduced along the way should be applicable in many situations, and I hope you'll put them to use.
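The cart rules assembled above — the subtotal, conditional shipping, the total, and the amount remaining until free shipping — can also be written out as ordinary code, which can help when handing the mockup's intended behavior over to developers. Below is a minimal TypeScript sketch of the same rules; the names (`shoukei`, `shipping`, `tAm`, `shipTxt`, `extraFee`) mirror the Figma variables, and the ¥100/¥150 unit prices and ¥1,000 threshold are taken from the article. This is an illustration of the logic only — it is not code Figma generates.

```typescript
// Hypothetical model of the cart logic built in Figma above.
const PRICE_A = 100;           // Product A unit price (from the article)
const PRICE_B = 150;           // Product B unit price (from the article)
const FREE_SHIPPING_AT = 1000; // Free-shipping threshold
const SHIPPING_FEE = 500;      // Flat shipping fee below the threshold

interface CartState {
  shoukei: number;   // subtotal ("Shoukei")
  shipping: number;  // "Shipping"
  tAm: number;       // "T_Am", the total amount
  shipTxt: boolean;  // "Ship_Txt": show the "spend more" message?
  extraFee: number;  // "Extra_Fee": amount remaining until free shipping
}

// Equivalent of the actions attached to every plus/minus button:
// recompute everything from the two quantities ("Kosu1", "Kosu2").
function recalc(kosu1: number, kosu2: number): CartState {
  const shoukei = kosu1 * PRICE_A + kosu2 * PRICE_B;
  // The if/else branch set on the buttons in Figma.
  const shipping = shoukei < FREE_SHIPPING_AT ? SHIPPING_FEE : 0;
  return {
    shoukei,
    shipping,
    tAm: shoukei + shipping,
    shipTxt: shoukei < FREE_SHIPPING_AT,
    extraFee: Math.max(FREE_SHIPPING_AT - shoukei, 0),
  };
}
```

For the initial state of one of each product, `recalc(1, 1)` yields a ¥250 subtotal, ¥500 shipping, and a ¥750 total with ¥750 still needed for free shipping — matching the initial variable values set in the article.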
Introduction

Hello, I am Sugimoto from the Creative Office. This article is the second in our two-part series introducing the creation of our mascot character design. The first post detailed our journey from receiving the request through to conceptualization. Briefly: first, every employee has been part of the process. Second, the mascot character project (referred to as "the PJ" from here on) based its selection on KINTO's vision and brand personality, as well as future developments and branding, rather than simply on popularity. Third, a survey was conducted among all employees using character concepts volunteered by employees. The poll gauged the popularity of ideas revolving around the motif of "clouds," which is also where KINTO's corporate name came from. Specifically, the following characters were the most popular: the left one for its ability to shape-shift at will, and the right one for its charm as a cloud transformed into a car. By the way, as a manager, I felt relieved that both selected ideas came from the Creative Office. These two proposals gained popularity.

Bringing Life to a Cloud Motif

1. Not all deliverables should be handled in-house!

Based on the above two proposals, next up was illustrating. People often assume that all designers can take pictures, make videos, and even draw illustrations. I frequently get questions like, "Since there is no money to outsource, can we do this in-house?" or "Can't you just use AI to make it quickly?" Among our in-house designers there are, of course, team members who are good at illustration. What matters here, though, is discerning when to delegate tasks to specialists — as the Japanese expression goes, "leave mochi-making to the mochi shop." We are breathing life into a character here! We need to distinguish between a well-drawn illustration and an illustration that brings a character to life.
As a creator myself, I strongly believe that the most respectful approach is to seek input from specialists in illustration and character design when aiming to create something of quality. I would say this is an example of a deliverable that cannot and should not be produced in-house.

2. Then what should we do? Outsourcing the illustration

Luckily, despite budget constraints, the business side of the PJ team was filled with members who genuinely respected the creation process. They did not simply say, "Designers can draw illustrations too, so they should do it"; instead, they worked hard to increase the production budget for the illustrations. We decided to rely on Steve* Inc., a creative company specializing in branding and design planning for companies, products, and communities. They worked with us to create a story that brought the character to life while staying close to the concept the PJ wanted to uphold. What we, the Creative Office, requested of Steve* Inc. was a character that even adults would want to own, for example as merchandise — a tone that is cute and adorable while also appealing to adults. Based on our request, they provided three proposals for characters with a cloud motif: A: Mysterious Creature K, B: Haguregumo, and C: Kumorisu (the squirrel-cloud). With expressions that made you want to protect them, they all appeared to be watching over us from above — indeed, Steve* Inc. did a great job! All the PJ members listened to the presentation with excitement. Next, we conducted a survey asking all employees to share the good points of, and their concerns about, each proposal. By judging from multiple perspectives through this survey, we were able to go beyond appearances and gain insight into each character's potential issues. As a result of the survey, we decided on proposal A, "Mysterious Creature K"!

"Mysterious Creature K": I'm Sure You'll Ask, "Is This Really Its Name?"
Wouldn't it be interesting to take advantage of this impactful name and run promotions that kept a sense of "mystery" around both the name and its existence? That's why the meetings with marketing and social media staff also became very exciting. Now that the form was decided, the next step was to polish it up. Additionally, its name was rewritten in hiragana to enhance readability and familiarity.

The "Mysterious Creature K" Starts to Take Shape!

I think the somewhat absentminded look on "Mysterious Creature K" is also pretty good. However, we continued to refine its form and facial features to ensure it will be beloved for years to come. Specific examples:

- We wanted its form a little closer to the letter "K" (some suggested that at a quick glance it might not be immediately recognizable as such).
- We wanted a little more of a cloud-like appearance (it initially resembled hand-soap bubbles or marshmallows).
- We wanted to slightly adjust the balance between the eyes and the body, considering the cloud motif (a bit more of the balance of Steve* Inc.'s early illustrations, where the cloud looks bigger).
- We wanted a little more contrast between the whites of the eyes and the "cloud" in the 3D version of K's eyes (only the black parts were visible, so we wanted to adjust the edge lines and shadows to bring them closer to 2D K's eyes and keep the cuteness; we also thought a matte texture might be more suitable).
- We wanted to add more brand colors to the 3D version of K (on the black parts of the eyes, body shadows, etc.).

In addition to the 2D illustrations, we decided to make 3D ones as well, to ensure they integrate seamlessly with vehicle images. And what about the fluffiness? It may work well in illustrations, but how will it translate into a costume? What color should the eyes be? Should the character have no mouth and remain silent, refraining from engaging in sales talk?
These were all things we considered as we developed the character's personality and traits. And this is its current form and expression! The naming campaign launched in July 2023 and received a total of 932 submissions. We grouped the entries by different criteria (see Part 1) to determine the best name:

- Kumo no Kinton
- Kumobii
- Mysterious Creature K
- K

Although both "Kumo no Kinton" and "Kumobii" were popular among our customers (KINTO subscribers in Japan), "Kumobii" emerged as the favorite among the target generation of teens through thirties, and internal voting confirmed its first-place ranking. Hence, we opted for the name "Kumobii." The name derives from combining "kumo (cloud)" and "mobility," which is very KINTO-like, and the PJ members were satisfied with it. This is how "Kumobii" was born. I believe there will be more opportunities for its exposure in company promotions from now on. We're excited for you to see them! Check out the unique features of "Kumobii"!

▼ Click here for the story of Kumobii ▼

@ card
Introduction

Hello. I am Nakaguchi from the Mobile App Development Group at KINTO Technologies. I attended TechBrew in Tokyo: Facing Technical Debt in Mobile Apps, held on May 23, 2024, and here is my report on the event.

The Day of the Event

The venue was Findy's newly relocated office. I had heard the rumors, but the event space was so spacious and beautiful that my excitement rose 😀 True to the TechBrew name, plenty of drinks and light snacks were provided, making for a very relaxed atmosphere. That said, I had an LT (lightning talk) to give later, so I held off on the alcohol until my presentation was done 👍

LT1: "A Guide to Evolving Bitkey's Mobile App"

This talk traced the history of Bitkey's mobile app up to the present. Originally built with React Native, the app evolved by going native, then adopting SwiftUI, then adopting TCA. The SwiftUI adoption, however, is still only partway through, and they said it may have been a mistake: SwiftUI's behavior changes across iOS versions, which caused them real trouble — an experience I have shared and sympathized with. Two remarks in the LT stuck with me: "Everything we believed was good was the right answer" and "The decision made at the time was surely right for that time" — and I think that's exactly so. I also got to talk with the presenter, Ara-san, at the after-party, including about Swift on Windows; he knew plenty of things I didn't, and it was a lot of fun.

LT2: "A Company-Wide Approach to Technical Debt in Mobile Apps"

This talk covered what technical debt is and how to confront it. Technical debt needs to be considered in two categories: debt that was recognized and accepted in exchange for some return, and debt that was never recognized as debt in the first place, or that became debt through environmental change. The former rarely becomes a major problem, but leaving the latter unattended for too long can cause problems beyond what can be tolerated. To confront technical debt, you need to negotiate time for debt reduction even if it means pausing business tasks, and to treat debt as everyone's problem — stakeholders included — not just the development team's. I found this very convincing; engineering managers and team leads in particular need that kind of negotiating skill. They also use Four Keys to make the situation visible, while cautioning that turning the numbers into targets is dangerous. I too have always felt that visualizing a team's development capability is difficult, and I try not to rely too heavily on frameworks like Four Keys.

LT3: "Living with Technical Debt in Safie Viewer for iOS"

The speaker develops an app that has been released for ten years, and shared the concerns that come with that and their policies for dealing with them. Much of the technology in use dates from the original release. They would like to re-architect, but the truth is that nothing fatal has happened and they can still ship plenty of new features as-is. That makes it hard to justify time-consuming refactoring, and they have struggled to move on debt reduction. They currently tackle what they can along two main axes:

- Do immediately what can be done immediately: update as soon as a new Xcode version is released (some code cannot be written without upgrading, i.e., staying behind breeds legacy code), and introduce Danger.
- Settle in for the long haul: the app is currently MVC/MVP with closure-based async handling; re-architecting from that state is risky, so validate modern technologies in new features only.

To actually carry this out, they said a concrete schedule is needed — a point well taken. I also hesitate a great deal over large refactorings, so I felt that drawing up a solid schedule and seeing it through is key.

LT4: "Advancing Mobile Development Safely with Package Management"

Like LT3, this concerned an app with a long history — eight years — and how they resolved debt, focusing on shared code and separation. A recent pain point is that excessive code sharing exists in many places.
As an example, their Channel data ended up carrying around 100 parameters (borrowing the presenter's own phrasing), so data that is not all used every time was being held all over the codebase. On the other hand, splitting responsibilities too finely also requires care: code called from only one place had been separated out, and such overdone cases were common too. "Share deliberately" and "separate responsibilities deliberately" left a strong impression on me — I suspect I, too, have split things up without thinking deeply enough... They then introduced the idea of managing these concerns with a package manager, along with the thinking and methods behind it.

LT5: "Taking on Technical Debt with GitHub Copilot"

This was my own talk; the slides are here. Using GitHub Copilot in Xcode still has many limitations compared to officially supported editors such as VS Code, and I feel adoption remains low. On the other hand, I found that even in Xcode the Chat feature can contribute to resolving technical debt, so that is what I presented. Partway through, I gave a live demo of the Chat feature and could feel the room's attention sharpen — it seemed people were genuinely interested, which made me very happy. It was my first talk at an external event, but the audience listened warmly and I got through the presentation safely.

Closing

After the LTs there was an after-party, where I exchanged information with many people. It was very stimulating, and I want to keep actively participating in and speaking at external events like this. I also got to talk with Takahashi-san, the event's organizer, about how it would be great to hold some kind of event together between our mobile group and Findy — something I hope we can actively pursue. As a souvenir, I got some IPA brewed by Findy!
Unit testing with Flutter Web

Hello. I am Osugi from the Woven Payment Solution Development Group. My team is developing the payment system that will be used by Woven by Toyota for the Toyota Woven City. We mainly use Kotlin/Ktor for backend development and Flutter for the frontend. In Flutter Web, test runs can fail with errors when web-specific packages are used. In this article, I would like to summarize what we do to keep Flutter Web code testable, with a particular focus on unit testing. If you're interested in the story behind our frontend development journey so far, feel free to check out these articles: A Kotlin Engineer's Introduction to Flutter and Making a Web App Within a Month; The Best Practices Found by Backend Engineers While Developing Multiple Flutter Applications at Once.

What is Flutter Web?

First of all, Flutter is a cross-platform development framework developed by Google, and Flutter Web is its framework specialized for web application development. Dart, Flutter's development language, can be compiled to JavaScript ahead of time, with rendering performed via HTML, Canvas, and CSS, allowing code developed for mobile applications to be ported directly to web applications.

How To Implement Flutter Web

Basic implementation works the same way as mobile application development. But what if you need DOM manipulation or browser APIs? These are available through Dart's built-in web-platform packages such as dart:html ^1. For example, a file download feature can be implemented much as in ordinary JavaScript web development.

:::message
The SDK versions at the time of writing are Dart v3.2 and Flutter v3.16.
:::

The widget below is a sample application with a feature of no obvious practical use: it downloads the counted-up number as a text file. The text file is downloaded by clicking the floating action button.
```dart
import 'dart:html';

import 'package:flutter/material.dart';

class MyHomePage extends StatefulWidget {
  const MyHomePage({super.key, required this.title});

  final String title;

  @override
  State<MyHomePage> createState() => _MyHomePageState();
}

class _MyHomePageState extends State<MyHomePage> {
  int _counter = 0;

  void _incrementCounter() {
    setState(() {
      _counter++;
    });
  }

  @override
  Widget build(BuildContext context) {
    return Scaffold(
      appBar: AppBar(
        backgroundColor: Theme.of(context).colorScheme.inversePrimary,
        title: Text(widget.title),
      ),
      body: Center(
        child: Column(
          mainAxisAlignment: MainAxisAlignment.center,
          children: <Widget>[
            const Text(
              'You have pushed the button this many times:',
            ),
            Text(
              '$_counter',
              style: Theme.of(context).textTheme.headlineMedium,
            ),
            IconButton(
              onPressed: _incrementCounter,
              icon: const Icon(Icons.add),
            )
          ],
        ),
      ),
      floatingActionButton: FloatingActionButton(
        onPressed: () {
          AnchorElement(href: 'data:text/plain;charset=utf-8,$_counter')
            ..setAttribute('download', 'counter.txt')
            ..click();
        },
        tooltip: 'Download',
        child: const Icon(Icons.download),
      ),
    );
  }
}
```

Unit Testing Flutter Web Code

The test code for the sample above is prepared as follows (mostly as output by flutter create).

```dart
import 'package:flutter/material.dart';
import 'package:flutter_test/flutter_test.dart';
import 'package:sample_web/main.dart';

void main() {
  testWidgets('Counter increments smoke test', (WidgetTester tester) async {
    await tester.pumpWidget(const MyApp());

    expect(find.text('0'), findsOneWidget);
    expect(find.text('1'), findsNothing);

    await tester.tap(find.byIcon(Icons.add));
    await tester.pump();

    expect(find.text('0'), findsNothing);
    expect(find.text('1'), findsOneWidget);
  });
}
```

Run the following test command, or run the above test code from the Testing tab of VS Code.

```
$ flutter test
```

If you run the test as is, you will probably get an error like the following.

```
Error: Dart library 'dart:html' is not available on this platform.
```
```
// omitted
lib/utils/src/html_util.dart:4:3: Error: Method not found: 'AnchorElement'.
  AnchorElement(href: 'data:text/plain;charset=utf-8,$data')
```

Apparently, something is wrong with importing dart:html.

Platform-Specific Dart Compilers

The official documentation indicates that the Dart compiler targets two platforms:

- Native platform: includes a Dart VM with a JIT compiler, plus an AOT compiler for producing machine code.
- Web platform: transpiles Dart code into JavaScript.

In addition, the packages available on each platform differ:

| Platform | Available Packages |
| --- | --- |
| Native | dart:ffi, dart:io, dart:isolate |
| Web | dart:html, dart:js, dart:js_interop, etc. |

So it turns out the test above was running on the VM, where dart:html is not available. Specifying the platform at test time is one way to avoid import errors for Web-platform packages. You can run the tests on Chrome (as the web) with the following option ^2.

```
$ flutter test --platform chrome
```

:::message
You can confirm that the option-less test runs on a VM with flutter test --help --verbose.

--platform Selects the test backend.
[chrome] (deprecated) Run tests using the Google Chrome web browser. This value is intended for testing the Flutter framework itself and may be removed at any time.
[tester] (default) Run tests using the VM-based test environment.
:::

Should Flutter Web Test Code Run on Chrome?

When developing web applications, using browser APIs is inevitable, but should Flutter Web test code really run on Chrome? In my personal opinion, it is better to avoid this as much as possible, because: running the tests requires launching Chrome in the background, which increases test startup time; and Chrome must be installed in the CI environment, which increases the container size of the CI environment.
It may also take a long time to set up containers, which would considerably increase the monetary cost of a CI environment. (Of course, if you just want a quick local check, or if money is no object, no problem!) Below are local results comparing the standard case (Native, no platform specified) with the Chrome case (Web):

| Platform | Program run time (sec) | Total test run time (sec) |
| --- | --- | --- |
| Native | 2.0 | 2.5 |
| Web | 2.5 | 9.0 |

As the table shows, the Web case actually took significantly longer to launch the tests, and program run time also increased by about 25%.

![tester](/assets/blog/authors/osugi/20240301/annoying.png =400x)

Separate Web Platform-Dependent Code

Can the above error be avoided without specifying the web platform? In fact, Dart offers conditional imports and exports for packages, along with flags to determine whether the platform is Web or Native ^3:

| Flag | Description |
| --- | --- |
| dart.library.html | Whether it is a Web platform |
| dart.library.io | Whether it is a Native platform |

These can be used to avoid the errors. First, prepare the download function module for Web and Native as follows, separating the aforementioned web-package usage from the code under test.

```dart
// util_html.dart (Web)
import 'dart:html';

void download(String fileName, String data) {
  AnchorElement(href: 'data:text/plain;charset=utf-8,$data')
    ..setAttribute('download', fileName)
    ..click();
}
```

```dart
// util_io.dart (Native)
void download(String fileName, String data) =>
    throw UnsupportedError('Not support this platform');
```

Here is how to switch the import of the above module for each platform.
```diff
  import 'package:flutter/material.dart';
- import 'dart:html';
+ import './utils/util_io.dart'
+     if (dart.library.html) './utils/util_html.dart';

  class MyHomePage extends StatefulWidget {
    // omitted
  }

  class _MyHomePageState extends State<MyHomePage> {
    // omitted

    @override
    Widget build(BuildContext context) {
      return Scaffold(
        // omitted
        floatingActionButton: FloatingActionButton(
          onPressed: () {
-           AnchorElement(href: 'data:text/plain;charset=utf-8,$_counter')
-             ..setAttribute('download', 'counter.txt')
-             ..click();
+           download('counter.txt', _counter.toString());
          },
          tooltip: 'Download',
          child: const Icon(Icons.download),
        ),
      );
    }
  }
```

If you want to use export instead, you have to prepare a separate intermediary file such as util.dart and import it from the widget side. (I will omit it here.)

```dart
export './utils/util_io.dart'
    if (dart.library.html) './utils/util_html.dart';
```

You can now run your tests on the Native platform, avoiding errors caused by Web-dependent code.

Let's also create Native-platform stubs for Web platform-dependent external packages

Our system uses Keycloak as its authentication infrastructure, and the following package is used for Keycloak authentication in our Flutter web applications.

@ card

If you open the link, you'll see this package only supports the web. Thanks to this package, the authentication process was implemented with ease. However, due to the nature of an authentication module, its interface is used in various places. Consequently, all API calls and other widgets that require authentication information depend on the web platform, making them impossible to test in CI. (In the meantime, we had been testing locally with the --platform chrome option, and if everything passed, it was considered OK.) In addition, importing this package causes the following error during test execution.

```
Error: Dart library 'dart:js_util' is not available on this platform.
```
Therefore, I will apply the same import-separation procedure to external packages, but this time I would like to practice the pattern using export. The procedure is as follows.

1. Creating an intermediary package

As an example, I created a package called inter_lib inside the sample code package.

```
flutter create inter_lib --template=package
```

In the actual product code, the external package is wrapped by a package separate from the product, to prevent code tied to the external package from leaking into the product code. I recommend using Melos, because it makes multi-package development easy.

2. Creating a stub for the Native platform

To create a stub for keycloak_flutter, refer to its GitHub repository and replicate the interface (please check the license as appropriate). Every class and method used in the product code is required.

@ card

The created files look as follows; the files prefixed with stub_ under the src directory replicate the external package's interface.

```
inter_lib
├── lib
│   ├── keycloak.dart
│   └── src
│       ├── stub_keycloak.dart
│       ├── stub_keycloak_flutter.dart
│       └── entry_point.dart
```

Also, entry_point.dart is defined to export the same symbols as the actual external package (in practice, only the interface used in the product code is sufficient).

```dart
export './stub_keycloak.dart'
    show
        KeycloakConfig,
        KeycloakInitOptions,
        KeycloakLogoutOptions,
        KeycloakLoginOptions,
        KeycloakProfile;
export './stub_keycloak_flutter.dart';
```

To publish this inter_lib internally as a package, configure the export as follows.

```dart
library inter_lib;

export './src/entry_point.dart'
    if (dart.library.html) 'package:keycloak_flutter/keycloak_flutter.dart';
```

3. Adding the intermediary package to dependencies in pubspec.yaml

Add a relative path to inter_lib in pubspec.yaml.

```diff
  // omitted
  dependencies:
    flutter:
      sdk: flutter
    cupertino_icons: ^1.0.2
+   inter_lib:
+     path: './inter_lib'
  // omitted
```

Then, replace the original references to the external package with inter_lib.
```dart
-import 'package:keycloak_flutter/keycloak_flutter.dart';
+import 'package:inter_lib/keycloak.dart';
 import 'package:flutter/material.dart';
 import 'package:sample_web/my_home_page.dart';

 void main() async {
   WidgetsFlutterBinding.ensureInitialized();
   final keycloakService = KeycloakService(
     KeycloakConfig(
       url: 'XXXXXXXXXXXXXXXXXXXXXX',
       realm: 'XXXXXXXXXXXXXXXXXXXXXX',
       clientId: 'XXXXXXXXXXXXXXXXXXXXXX',
     ),
   );
   await keycloakService.init(
     initOptions: KeycloakInitOptions(
       onLoad: 'login-required',
       enableLogging: true,
       checkLoginIframe: false,
     ),
   );
   runApp(
     const MyApp(),
   );
 }
```

The above outlines the process of creating a Native-platform stub for a Web-platform-dependent external package. The tests can now run in the VM. This method can, of course, be applied beyond the keycloak_flutter package used in this example.

![successful people](/assets/blog/authors/osugi/20240301/success.png =480x)

Summary

This article summarized our approach to keeping Flutter Web code testable.

- Dart's execution environments include a Web platform and a Native platform.
- flutter test runs on the Native platform, so using a Web-platform library such as dart:html causes an error.
- This can be solved with an implementation that switches between the real package and a stub per platform, utilizing the dart.library.io and dart.library.html flags.
Introduction

I am Hand-Tomi, and I work on the Android version of my route at KINTO Technologies. It has been almost a year since Android 14 was released on April 12, 2023. However, I feel that the concept of "Regional Preferences" on Android remains unclear to many, which is why I've chosen to delve into this topic in this article. Developing multilingual applications without understanding regional settings carries the risk of unforeseen bugs; I hope this article helps readers mitigate those risks.

Key Points Covered in This Article

```kotlin
Locale.getDefault() == Locale.JAPAN
```

:::details Code description
- Locale: a class representing a specific cultural and geographic setting based on language, country, or region
- Locale.getDefault(): returns the default Locale for the current application
- Locale.JAPAN: a Locale instance representing the Japanese language (ja) and country (JP)
:::

Does the above code output true if the device is set to Japanese (Japan)? Or does it output false? The correct answer is true on Android 13 and below, and unknown on Android 14 and above, given only this much information. This article explains why it is unknown on Android 14 and above!

What is Locale on Android?

Locale is a class that represents a cultural or geographic setting based on language, country, or region. Using this information, Android applications can be adapted to diverse users. Locale deals mainly with languages and countries, but more data can be extracted by using LocalePreferences.

```kotlin
val locale = Locale.getDefault()
println("calendarType = ${LocalePreferences.getCalendarType(locale)}")
println("firstDayOfWeek = ${LocalePreferences.getFirstDayOfWeek(locale)}")
println("hourCycle = ${LocalePreferences.getHourCycle(locale)}")
println("temperatureUnit = ${LocalePreferences.getTemperatureUnit(locale)}")
```

If you execute the above code on a device set to "Japanese (Japan)", the output is as follows.
- calendarType = gregorian: calendar system = Gregorian calendar
- firstDayOfWeek = sun: first day of the week = Sunday
- hourCycle = h23: hour cycle = 0-23
- temperatureUnit = celsius: temperature unit = Celsius

What is "Regional Preferences"?

Introduced in Android 14, the "Regional Preferences" feature lets users customize the "temperature" unit and "first day of week" otherwise determined by the Locale (language and country).

- Temperature: Use app default / Celsius (°C) / Fahrenheit (°F)
- First day of week: Use app default / Monday through Sunday

Temperature setting screen / First day of the week screen

:::details How to go to the settings
The "Regional Preferences" screen can be accessed from the "System" > "Language" section of the Settings app.
![setting](/assets/blog/authors/semyeong/2024-02-28-regional-preferences/setting.png =300x)
:::

Why do we need "Regional Preferences"?

Both the "United States" and the "Netherlands" use English, but the "temperature" unit and the "first day of the week" differ.

| | United States | Netherlands |
| --- | --- | --- |
| Temperature | Fahrenheit | Celsius |
| First day of week | Sunday | Monday |

If a Dutch person living in the United States is accustomed to Celsius and wants to change only the temperature unit to Celsius, "Regional Preferences" makes this possible.

What changes when you set "Regional Preferences"?

```kotlin
Locale.getDefault().toString()
```

To check the setting values, let's change each setting while running the code above.

| Language | Temperature | First day of week | Result |
| --- | --- | --- | --- |
| Japanese (Japan) | Default | Default | ja_JP |
| Japanese (Japan) | Fahrenheit | Default | ja_JP_#u-mu-fahrenhe |
| Japanese (Japan) | Default | Monday | ja_JP_#u-fw-mon |
| Japanese (Japan) | Fahrenheit | Monday | ja_JP_#u-fw-mon-mu-fahrenhe |

Setting "Temperature" and "First day of the week" produced unfamiliar suffixes such as #u, mu-fahrenhe, and fw-mon; these are the values of Locale's localeExtensions member.
Thus, if a value is set for localeExtensions, the results of hashCode() and equals() for Locale also change, and comparing with Locale.JAPAN no longer returns true. Then how do we check the language?

```kotlin
Locale.getDefault() == Locale.JAPAN                       // X
Locale.getDefault().language == Locale.JAPANESE.language  // O
```

If you want to check the language, compare the language property of the Locale. With this method, I believe you can get the results you are looking for without being affected by changes to "Regional Preferences."

Conclusion

The "Regional Preferences" feature was quietly added in Android 14, and when previously working code suddenly stops working because of it, the change is quite difficult to detect. Most people will have no problem, but if you are comparing Locale instances to check languages, please be sure to review your code. If as many people as possible can find and solve such bugs quickly, this article will be a great success!

Check out other articles written by my route team members!

- Structured Concurrency with Kotlin coroutines
- Jetpack Compose in myroute Android App
- A Beginner's Story of Inspiration With Compose Preview

Thank you for reading my article all the way to the end.

*The Android robot is reproduced or modified from work created and shared by Google and used according to terms described in the Creative Commons 3.0 Attribution License.
Overview

I am Cui from the Global Development Group at KINTO Technologies. I am currently the project manager of the Global KINTO App team, and I was previously the project manager for the back-office system developed by the Global Development Group. In this article, I will talk about Gitflow, the branch management method our back-office system development team adopted to manage our source code. I think it can be applied to other products as well, so I hope this article serves as a reference.

Gitflow

Note: In this article, I will only talk about Gitflow as adopted by our development team. In the following explanation, the branch name is written as "master," but on GitHub "master" is the old name and the default branch is now called "main." The roles are exactly the same. The overall diagram is as follows:

Role of Each Branch

- master: A branch that manages released source code; it holds the same source version as the application running in the production environment. Each release is tagged.
- develop: A branch that brings together the developed source code. It includes features that have not yet been released to the production environment and always has the latest functionality. Typically, regression tests are deployed and performed on this branch.
- feature: A branch for developing new or modified features. It branches from develop and merges back into the develop branch after integration testing is complete. Generally, one feature branch is created per user story, but the development team is free to decide.
- hotfix: A branch for bug fixes after release. It branches from the master branch; after the bug is fixed and tests pass, this branch is deployed to the production environment. Once the production deployment is complete, the branch is merged into both the master and develop branches, and into any in-flight release and feature branches as needed.
- release: A branch for product release. It branches from the develop branch once the features to be released are reflected.
This branch is used to deploy to the production environment. When the production deployment is complete, merge it into the master and develop branches and delete it.

- support: A branch required for projects that must continue to support older versions. The support branch maintains and releases older versions. It is derived from the commit on the master branch of the version that needs support, and bug fixes and releases are made on it independently until support ends.
- bugfix: In addition to the standard branch types above, we also define a branch type called bugfix. Details are described later, but if a bug is found prior to release, a bugfix branch is branched off from the release branch to handle the fix.

Development Flow

(1) Initialization

Create a develop branch from the master branch.
Note: The master and develop branches always exist as the main Gitflow branches and, once created, cannot be deleted. (Set up on GitHub.)

(2) Development of new and modified features

1. Create a feature branch from the develop branch and start developing the new or modified feature.
2. Feature branch naming convention: feature/xxxx. The "xxxx" can be decided by the development team. Example: feature/GKLP-001, feature/refactoring, feature/sprint15. It is also recommended to create an additional working branch from the main feature branch in order to make pull requests and perform source reviews before integration testing. Specific patterns are described later.
3. Commit source code revisions on the working branch and, when finished, submit a PR for review by others.
4. Once the source review is complete, merge it into the main feature branch and perform integration testing.
5. Once the integration test is complete, submit a PR to merge into the develop branch and merge it. Note: Always check the merge timing; depending on the release plan, there are times when completed development must not yet be merged into the develop branch.
6.
Delete the feature branch after merging into the develop branch.

Pattern No. 1: Feature branch and working branch
In this pattern, all working branches off the feature branch are merged before integration testing is performed. This pattern is appropriate when the development of a single feature is large and expected to span multiple sprints.

Pattern No. 2: Branch per sprint and working branch
In this pattern, you are not limited to running integration tests only after all working branches have been merged; you can also run integration tests for a single feature within a sprint once the necessary development has been merged. This pattern is appropriate when the feature to be developed is small and expected to be completed within one sprint.

Pattern No. 3 (not recommended): Equating the feature branch with the working branch
In this pattern, the timing of PR submission and integration testing is unclear, and the frequency of merges into develop is high, which makes QA and release planning very cumbersome. We do not recommend such an unplanned approach. Instead, properly plan your releases during system development and operation, and decide how to cut feature branches accordingly!

(3) Release & Deployment

1. Create a release branch from the develop branch.
2. Tag the release branch. (See the tag naming convention below.)
3. When deployment to the production environment is finished, merge the release branch into the master branch.
4. Delete the release branch after the merge is complete.

Release Plan

For development that you plan to release to the production environment, create a release plan as soon as possible. The operational rules for feature branches and the timing of merging feature branches into develop are determined according to the release plan.
The simplest release plan is to release all features that have been developed in the develop branch, which only requires the creation of a release branch. However, if multiple development teams are developing different features at the same time and plan to release them multiple times, you should create a release branch first and merge the targeted features one by one. For example, if features 1, 2, and 3 are developed simultaneously, but features 1 and 2 are released first and feature 3 is released a few weeks later: Once release branches such as release 1.0 and 2.0 above are created by branching off from the develop branch, the rule is that, in principle, modified source code from the develop branch should never be merged in again. The reason is that if there are multiple release plans, after the release branch is created, another feature may be merged into the develop branch, and if the feature is further merged from the develop branch, the feature will be mistakenly released even though it has not been tested. As shown in the figure below: Also, feature branches are not merged into develop immediately after development is completed. Once merged into develop, it will be included in the next release, so make sure to check the timing of merging feature branches into develop according to your release plan. If a bug is found prior to release Create a bugfix branch by branching off from the release branch. Then fix the bug, submit a PR and merge it into the release branch. Fixed bugs are reflected after release work when the release branch is merged into master and develop. As shown in the figure below: (4) Bug fix in production environment If a bug occurs in the production environment, follow the steps below to fix it. First, create a hotfix branch from the master branch. Tag the hotfix branch when you are done fixing it. (See Tag naming convention below for naming convention.) 
When deployment to the production environment is complete, merge the hotfix branch into the master and develop branches. Delete the hotfix branch after the merge is complete.

Maintenance branch

The product's versioning policy is per microservice, and each major version has a set maintenance period. Therefore, a maintenance branch is needed for each major version in the microservice's GitHub repository. For example, the microservice "Automotive" has had three major versions released so far (V.1, 2, and 3), so the maintenance branches would look like this:

To make minor changes or fix bugs in an old major version, it is advisable to branch from the corresponding maintenance branch, but you can also make an appropriate release plan depending on the scale of development and decide on development and release branches.

Branch Commit Rule

There are two ways to merge modified source code into a Git branch:

- commit directly
- submit a pull request and have a reviewer approve it before merging

In principle, it is advisable to opt for making a pull request and then merging. However, you may commit directly to the following branches:

1. Working feature branches for developing new and modified features
2. Bugfix branches for fixing bugs just before release
3. Hotfix branches for post-release bug fixes

Tag Naming Convention

Development environment

1.1 On GitHub, manually at release time (not recommended)
→ Tag the git branch.
 Naming convention: x.x.x-SNAPSHOT
 Example: 1.0.0-SNAPSHOT
→ When registering to ECR, the image is automatically tagged according to the tag and time.
 Image tag name: x.x.x-SNAPSHOT_yyyyMMdd-hhmmss
 Example: 1.0.0-SNAPSHOT-20210728-154024

1.2 Use JIRA tickets and tag automatically at release time (recommended)
→ Do not tag the git branch.
→ When registering to ECR, the image is automatically tagged according to the current branch and time.
 Image tag name: Branch name_yyyyMMdd-hhmmss
 Example: develop-20210728-154024

Staging & Production Environment

Manually tag the release or hotfix branch.
Naming convention: release.x.x.x
Example: release.1.0.0

Challenges solved by this Git branch strategy

Our development team was launched a year ago. At the beginning, we encountered confusion over source code management due to the diverse development experience and backgrounds of our team members. There was also a "core" team of developers at our headquarters and a "star" team of developers working offshore on our project. Although the two teams work on different features, it is inevitable that the same source files are sometimes modified at the same time. Thus, the following problems occurred:

- Source code conflicts occurred, in which other people's updates were accidentally deleted
- Features were developed based on old source code
- Phased releases were not feasible

We value teamwork in system development, and rules that are acknowledged and followed by everyone are essential. This is exactly what Gitflow provides. Team members responsible for developing different features can create separate feature branches and modify the sources without impacting each other's work. Also, by keeping the latest source code in the develop branch according to the sprint development cycle, everyone can base their work on the latest source code at the start of the next development cycle. In addition, by creating a release branch for each release plan, the developed features can be released gradually, reducing the burden on developers and the risk to the project itself! With this Git branch strategy in place, the back-office system development team I previously led was able to overcome the chaos and stably develop and release features!
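The branch lifecycle described in steps (1) through (3) can be sketched as a short sequence of git commands. This is a minimal, self-contained illustration run in a throwaway repository; the branch and tag names are examples taken from the conventions above, not a prescribed script:

```shell
#!/bin/sh
# Minimal walk-through of the Gitflow lifecycle described above,
# run in a throwaway repository. Branch and tag names are illustrative.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git symbolic-ref HEAD refs/heads/master    # name the default branch "master" as in the article
# Helper so commits/merges work even without a global git identity configured
g() { git -c user.name=demo -c user.email=demo@example.com "$@"; }
g commit -q --allow-empty -m "initial commit"

# (1) Initialization: create develop from master
git branch develop

# (2) Feature development: branch from develop, merge back when done
git checkout -q -b feature/GKLP-001 develop
g commit -q --allow-empty -m "implement feature"
git checkout -q develop
g merge -q --no-ff feature/GKLP-001 -m "merge feature via PR"
git branch -d feature/GKLP-001             # delete the feature branch after merging

# (3) Release: branch from develop, tag, then merge into master and develop
git checkout -q -b release/1.0.0 develop
git tag release.1.0.0                      # tag naming convention: release.x.x.x
git checkout -q master
g merge -q release/1.0.0 -m "release 1.0.0"
git checkout -q develop
g merge -q release/1.0.0 -m "merge release back into develop"
git branch -d release/1.0.0                # delete the release branch after merging
git log --oneline master
```

Each short-lived branch (feature, release) is deleted once merged, so only master and develop remain, matching the rule that the two main Gitflow branches are permanent.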
As I take on the role of project manager for the app development project, I aim to draw on these experiences when encountering similar challenges. Why not use Gitflow as a reference for your own product development?
I tried building an AWS Serverless Architecture with Nx and Terraform!

Hello. I'm Kurihara, from the CCoE team at KINTO Technologies, and I'm passionate about creating DevOps experiences that bring joy to developers. As announced at AWS Summit Tokyo 2023, our DBRE team's approach to achieving both agility and governance for our vehicle subscription service KINTO is to deploy a company-wide platform that provides temporary jump servers (called the "DBRE platform" from now on), triggered by requests from Slack. This DBRE platform is implemented using a combination of several AWS serverless services. In this article, we will introduce how we improved the developer experience by using a Monorepo tool called Nx together with Terraform. Our aim is to provide insights that can benefit anyone interested in adopting a Monorepo development approach, irrespective of their focus on serverless architectures.

Background and Issues

The architecture of our DBRE platform looks as follows:

In addition to the above, there are about 20 Lambdas developed in Golang, Step Functions to orchestrate them, DynamoDB, and EventBridge for scheduled triggers. The following issues and requests were raised in the development process:

- Integrate with "terraform plan" and "terraform apply" workflows for secure deployment
- Incorporate appropriate static code analysis such as a formatter, linter, etc.

When considering the development of a serverless architecture, conventional choices like SAM or the Serverless Framework come to mind. However, we decided against them because we wanted to implement IaC with Terraform, and because support for Lambda functions developed in Golang was lacking. Let's look at the Terraform Lambda module. I thought that if I could produce a proper Zip of the Lambda code to be referenced in Terraform, I could resolve the issue of wanting to implement IaC with Terraform.
resource "aws_lambda_function" "test_lambda" { # If the file is not in the current working directory you will need to include a # path.module in the filename. filename = "lambda_function_payload.zip" function_name = "lambda_function_name" role = aws_iam_role.iam_for_lambda.arn handler = "index.test" source_code_hash = data.archive_file.lambda.output_base64sha256 runtime = "nodejs16.x" environment { variables = { foo = "bar" } } } Furthermore, consider the latter request to properly incorporate static code analysis. Serverless development is a combination of smaller code bases. In other words, we considered introducing the Monorepo tool with the idea that it would facilitate integration with development tools and keep build scripts simple by clearly defining the boundaries of the codebase group. What is a Monorepo tool? To get straight to the point, we took the decision to use a TypeScript-made Monorepo tool called Nx . We opted for Monorepo.tools primarily due to its extensive coverage of functions, as highlighted on the Monorepo tool comparison site. Additionally, its JavaScript-based architecture appealed to us as we thought it would be beneficial for scalability and accommodate future growth effectively. (Assuming the barriers of entry into the front-end community are low.) Examples will be given in the next chapter, but the premise is: What is Monorepo and what does Nx do? I will now explain briefly. Defining terms Let us take a moment to clarify how we've aligned the terms used in this document to match the conventions of Nx: Project : one repository-like bulk in monorepo (e.g. single Lambda code, common modules) Task : A generic term for the processes required to build an application, such as test, build, deploy, etc. What monorepo is It is described as a single repository where related projects are stored in isolated and well-defined relationships. In contrast, there is the multi-repository configuration often referred to in the Web realm as polyrepo. 
Source: monorepo.tools

In summary, monorepo.tools lists the following advantages:

- Atomic commits on a per-system basis
- Easy sharing of common modules (when a common module is updated, it can be used immediately without publishing or importing steps)
- Easier to think of the system as a whole, rather than in vertical slices (in terms of mindset)
- Less work required when setting up a new repository

While AWS CDK isn't categorized as a Monorepo tool, it shares a similar philosophy regarding the management of IaC and application code, aligning with the monorepo trend of consolidating infrastructure and application code within a single repository.

> We discovered that failures are often related to "out-of-band" changes to an application that aren't fully tested, such as configuration changes. Therefore, we developed the AWS CDK around a model in which your entire application is defined in code, not only business logic but also infrastructure and configuration. …and fully rolled back if something goes wrong.
> - https://docs.aws.amazon.com/cdk/v2/guide/best-practices.html

What Nx can do

Roughly speaking, if you define tasks and dependencies for each project, Nx will orchestrate the tasks. The following is an example of defining tasks and dependencies for a terraform project. When defined this way, plan-development will first build (compile and Zip-compress) the Lambda code according to the defined dependencies, and then run terraform plan. fmt and test can likewise be defined as tasks specific to the terraform project. By clarifying the responsibilities of each code base in this way, we improve the overall clarity of the code. Development tools suited to each language can be incorporated per project, and an appropriate development flow can be built without relying on a dedicated build specialist.
Practical examples at KTC

The following is an excerpt from the aforementioned DBRE platform, simplified and illustrated with practical examples. There are two Golang Lambda code bases, both using the same common module. Each Lambda code project is responsible for compiling its own code and creating a Zip file so that it can be deployed from Terraform. The directory structure looks like this.

Project Definition

The project definitions for each of the four projects above are listed below.

① Common module

In Golang, common modules only need to be referenced by their users, so no build is required; only static analysis and UT are defined as tasks.

projects/dbre-toolkit/lambda-code/shared-modules/package.json

```json
{
  "name": "shared-modules",
  "scripts": {
    "fmt-fix": "gofmt -w -d .",
    "fmt": "gofmt -d .",
    "test": "go test -v"
  }
}
```

②, ③ Lambda code

By registering the common module as a dependent project, we define that if the common module's code changes, the task needs to be re-executed. The build task runs go build and Zips the generated binary, which is later used by the terraform project.
projects/dbre-toolkit/lambda-code/lambda-code-01/package.json

```json
{
  "name": "lambda-code-01",
  "scripts": {
    "fmt-fix": "gofmt -w -d .",
    "fmt": "gofmt -d .",
    "test": "go test -v",
    "build": "cd ../ && GOOS=linux GOARCH=amd64 CGO_ENABLED=0 go build -o lambda-code-01/dist/main lambda-code-01/main.go && cd lambda-code-01/dist && zip lambda-code.zip main"
  },
  "nx": {
    "implicitDependencies": [
      "shared-modules"
    ]
  }
}
```

④ IaC

When plan-${env} or apply-${env} is executed, the build of the Lambda code specified in the dependencies runs first (so the necessary Zip is generated whenever plan or apply is executed).

projects/dbre-toolkit/iac/package.json

```json
{
  "name": "iac",
  "scripts": {
    "fmt": "terraform fmt -check -diff -recursive $INIT_CWD",
    "fmt-fix": "terraform fmt -recursive $INIT_CWD",
    "test": "terraform validate",
    "plan-development": "cd development && terraform init && terraform plan",
    "apply-development": "cd development && terraform init && terraform apply -auto-approve"
  },
  "nx": {
    "implicitDependencies": [
      "lambda-code-01",
      "lambda-code-02"
    ],
    "targets": {
      "plan-development": {
        "dependsOn": [
          "^build"
        ]
      },
      "apply-development": {
        "dependsOn": [
          "^build"
        ]
      }
    }
  }
}
```

From the terraform module, refer to the Zip file generated in the previous step as follows:

```hcl
locals {
  lambda_code_01_zip_path = "${path.module}/../../../lambda-code/lambda-code-01/dist/lambda-code.zip"
}

# Redacted

resource "aws_lambda_function" "lambda-code-01" {
  function_name    = "lambda-code-01"
  architectures    = ["x86_64"]
  runtime          = "go1.x"
  package_type     = "Zip"
  filename         = local.lambda_code_01_zip_path
  handler          = "main"
  source_code_hash = filebase64sha256(local.lambda_code_01_zip_path)
}
```

Task Execution

Now that each project has been divided and tasks defined, we will look at task execution. In Nx, the run-many subcommand can be used to execute specific tasks for a specific project or for all projects. Based on dependencies, tasks are executed in parallel when possible, which also speeds up the process.
```
nx run-many --target=<defined task name> --projects=<project name comma separated>
nx run-many --target= --all
```

Example of executing plan-development for the iac project. Tasks with dependencies will execute tasks based on the defined dependencies. This is exactly the point I wanted to make: it executes the tasks of the dependent projects ahead of time, thus ensuring that the Lambda code is properly zipped when terraform is executed.

```shell
$ nx run-many --target=plan-development --projects=iac --verbose

 >  NX   Running target plan-development for 1 project(s) and 2 task(s) they depend on:
 - iac

> nx run lambda-code-01:build
updating: main (deflated 56%)

> nx run lambda-code-02:build
updating: main (deflated 57%)

> nx run iac:plan-development
Initializing modules...
Initializing the backend...
Initializing provider plugins...
- Reusing previous version of hashicorp/aws from the dependency lock file
- Using previously-installed hashicorp/aws v4.39.0

Terraform has been successfully initialized!

--redacted

Plan: 0 to add, 2 to change, 0 to destroy.
```

Example of executing the test task for all projects. There are no task dependencies, so everything runs in parallel. Tasks with no dependencies, such as UT, can be executed in parallel. This allows for CI execution, as well as for development rules such as "always run UT before pushing to GitHub" to be resolved with a single command.

```shell
$ nx run-many --target=test --all --verbose

 >  NX   Running target test for 4 project(s):
 - lambda-code-01
 - lambda-code-02
 - shared-modules
 - iac

> nx run shared-modules:test
?       github.com/kinto-dev/dbre-platform/dbre-toolkit/shared-modules  [no test files]

> nx run lambda-code-01:test
=== RUN   Test01
--- PASS: Test01 (0.00s)
PASS
ok      github.com/kinto-dev/dbre-platform/dbre-toolkit/lambda-code-01  0.255s

> nx run iac:test
Success! The configuration is valid.

> nx run lambda-code-02:test
=== RUN   Test01
--- PASS: Test01 (0.00s)
PASS
ok      github.com/kinto-dev/dbre-platform/dbre-toolkit/lambda-code-02  0.443s

 >  NX   Successfully ran target test for 4 projects
```

Powerful features of Nx and Monorepo tools

We hope you can see how tasks can be orchestrated by properly defining the projects. However, this alone is no different from a regular task runner, so here are some of the major advantages of using Nx and Monorepo tools.

Execute tasks only for changed projects

The fastest task execution is to not execute the task in the first place. A mechanism called the affected command, which performs tasks only for changed projects, is available for fast completion of CI. The following is the command syntax. By passing two Git pointers, it will only execute tasks in the projects that have changed between the two pointers.
```
nx affected --target=<task name> --base=<two dots diff of base> --head=<two dots diff of head>
```

```shell
# State with changes only in lambda-code-01
$ git diff main..feature/111 --name-only
projects/dbre-toolkit/lambda-code/lambda-code-01/main.go

$ nx affected --target=build --base=main --head=feature/111 --verbose

 >  NX   Running target build for 1 project(s):
 - lambda-code-01

> nx run lambda-code-01:build
updating: main (deflated 57%)

 >  NX   Successfully ran target build for 1 projects
```

If there is a change in a project that others depend on, tasks are executed based on the dependencies.

```shell
# State with changes only in shared-modules
$ git diff main..feature/222 --name-only
projects/dbre-toolkit/lambda-code/shared-modules/utility.go

# Tasks in projects that depend on shared-modules are executed
$ nx affected --target=build --base=main --head=feature/222 --verbose

 >  NX   Running target build for 2 project(s):
 - lambda-code-01
 - lambda-code-02

> nx run lambda-code-01:build
updating: main (deflated 56%)

> nx run lambda-code-02:build
updating: main (deflated 57%)
```

Simplifying the CI/CD pipeline

If task names do not change, the CI/CD pipeline does not need to be changed as projects are added,
thus lowering maintenance costs. In addition, the affected command described above can speed up the CI/CD process (since it only executes tasks for the changed project). Below is an example of CI for GitHub Actions. name: Continuous Integration on: pull_request: branches: - main - develop types: [opened, reopened, synchronize] jobs: ci: runs-on: ubuntu-latest steps: -uses: actions/checkout@v3 with: fetch-depth: 0 # --immutable option to have the fixed version of dependencies listed in yarn.lock installed - name: install npm dependencies run: yarn install --immutable shell: bash - uses: actions/setup-go@v3 with: go-version: '^1.13.1' - uses: hashicorp/setup-terraform@v2 with: terraform_version: 1.3.5 - name: configure AWS credentials uses: aws-actions/configure-aws-credentials@v1-node16 with: aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }} aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }} aws-region: 'ap-northeast-1' # Task execution part is completed with this amount of description - name: format check run: nx affected --verbose --target fmt --base=remotes/origin/${{ github.base_ref }} --head=remotes/origin/${{ github.head_ref }} - name: test run: nx affected --verbose --target test --base=remotes/origin/${{ github.base_ref }} --head=remotes/origin/${{ github.head_ref }} - name: build run: nx affected --verbose --target build --base=remotes/origin/${{ github.base_ref }} --head=remotes/origin/${{ github.head_ref }} - name: terraform plan to development run: nx affected --verbose --target plan-development --base=remotes/origin/${{ github.base_ref }} --head=remotes/origin/${{ github.head_ref }} Combine with Git Hook for even greater productivity I'd like to see at least static analysis and Unit Test done locally before pushing with Git. Development rules such as 'Git history is dirty too' can be easily solved. 
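As a sketch of that idea, a Git pre-push hook can run the affected checks automatically before every push. The target names (`lint`, `unit-test`) and the `origin/main` base ref below are assumptions to adapt to your workspace; the throwaway repository stands in for your real working copy.

```shell
# Sketch: install a pre-push hook that runs Nx checks for affected projects only.
repo=$(mktemp -d)            # stand-in for your actual working copy
git init -q "$repo"

cat > "$repo/.git/hooks/pre-push" <<'EOF'
#!/bin/sh
# Check only the projects affected by the commits about to be pushed.
nx affected --target lint --base origin/main --head HEAD || exit 1
nx affected --target unit-test --base origin/main --head HEAD || exit 1
EOF
chmod +x "$repo/.git/hooks/pre-push"

echo "hook installed at $repo/.git/hooks/pre-push"
```

In a real repository you would write the hook into your own `.git/hooks/` (or manage it with a hook manager), so a push fails fast locally instead of waiting for CI.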
By combining the `--files` and `--uncommitted` options of the `affected` command with Git hooks, tasks can be limited to just the projects the changed files belong to, minimizing the developer's stress (and the time spent waiting on execution). For example, the following commands can run from a pre-commit hook to keep the commit history clean and reduce review noise.

```shell
nx affected --target lint --files $(git diff --cached --name-only)
nx affected --target unit-test --files $(git diff --cached --name-only)
nx affected --target fmt-fix --files $(git diff --cached --name-only)
```

## Other Benefits

### Task execution results are cached if the project code has not changed

The results of task execution are cached, covering both the generated files and the standard output/error. (See the Nx documentation on computation caching for details.)

```shell
$ tree .nx-cache/
.nx-cache/
├── ce36b7825abacc0613a8b2c606c65db6def0e5ca9c158d5c2389d0098bf646a1
│   ├── code
│   ├── outputs
│   │   └── projects
│   │       └── dbre-toolkit
│   │           └── lambda-code
│   │               └── lambda-code-01
│   │                   └── dist
│   │                       ├── lambda-code.zip
│   │                       └── main
│   └── terminalOutput
├── ce36b7825abacc0613a8b2c606c65db6def0e5ca9c158d5c2389d0098bf646a1.commit
├── nxdeps.json
├── run.json
└── terminalOutputs
    ├── 1c9b46c773287538b1590619bfa5c9abf0ff558060917a184ea7291c6f1b988c
    ├── 6f2fbb5f2dd138ec5e7e261995be0d7cddd78e7a81da2df9a9fe97ee3c8411c5
    ├── 88c7015641fa6e52e0d220f0fdf83a31ece942b698c68c4455fa5dac0a6fd168
    ├── 9dc8ebe6cdd70d8b5d1b583fbc6b659131cda53ae2025f85037a3ca0476d35b8
    ├── c4267c4148dc583682e4907a7692c2beb310ebd2bf9f722293090992f7e0e793
    ├── ce36b7825abacc0613a8b2c606c65db6def0e5ca9c158d5c2389d0098bf646a1
    ├── db7e612621795ef228c40df56401ddca2eda1db3d53348e25fe9d3fe90e3e9a1
    ├── dc112e352c958115cb37eb86a4b8b9400b64606b05278fe7e823bc20e82b4610
    └── eb94fd3a7329ab28692a2ae54a868dccae1b4730e4c15858e9deb0e2232b02f3
```

If this caching mechanism is also integrated into the CI/CD pipeline, it speeds up the checks that run during code review.
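The mechanism can be pictured as a content-addressed store: the task's inputs are hashed, and if an entry for that hash already exists, the stored output is replayed instead of the task being re-run. A toy illustration of the idea (not Nx's actual implementation):

```shell
# Toy content-addressed cache: skip the "build" when the input hash is already known.
work=$(mktemp -d)
cache="$work/.nx-cache"; mkdir -p "$cache"
echo 'package main' > "$work/main.go"

run_build() {
  # The cache key is a hash of the task's inputs.
  key=$(sha256sum "$work/main.go" | cut -d' ' -f1)
  if [ -f "$cache/$key" ]; then
    echo "cache hit: replaying stored output"
    cat "$cache/$key"
  else
    echo "cache miss: running build"
    echo "updating: main (deflated 57%)" | tee "$cache/$key"
  fi
}

run_build   # first run: cache miss, output is stored
run_build   # second run, same inputs: cache hit, nothing rebuilt
```

Nx additionally stores the generated output files under the same key, which is what allows a cached `build` to restore `dist/` without recompiling.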
For instance, when only part of the code needs modification, the cache lets most of the CI for the updated push finish almost immediately, improving development efficiency. Registering the Nx cache directory with the GitHub Actions cache looks like this:

```yaml
- name: set nx cache dir to environment variables
  id: set-nx-version
  run: |
    echo "NX_CACHE_DIRECTORY=$(pwd)/.nx-cache" >> $GITHUB_ENV
  shell: bash
# Register the nx cache with the GitHub Actions cache
- name: nx cache action
  uses: actions/cache@v3
  id: nx-cache
  with:
    path: ${{ env.NX_CACHE_DIRECTORY }}
    key: nx-cache-${{ runner.os }}-${{ github.sha }}
    restore-keys: |
      nx-cache-${{ runner.os }}-
```

### The graph command allows visualization of project dependencies

Even with the boundaries of the code base clarified, there are still times when you want an overall view of the dependencies. Nx maintains a `graph` subcommand that visualizes the dependencies between projects; being able to lean on the tooling for tasks like this is one of Nx's benefits.

## Current status of the DBRE platform

The DBRE platform monorepo currently holds 28 projects. With the small examples above it may have been hard to see the benefits, but at this scale the `affected` command really shines.
```shell
$ yarn workspaces list --json
{"location":".","name":"dbre-platform"}
{"location":"dbre-utils","name":"dbre-utils"}
{"location":"projects/DBREInit/iac","name":"dbre-init-iac"}
{"location":"projects/DBREInit/lambda-code/common","name":"dbre-init-lambda-code-common"}
{"location":"projects/DBREInit/lambda-code/common-v2","name":"dbre-init-lambda-code-common-v2"}
{"location":"projects/DBREInit/lambda-code/push-output","name":"dbre-init-lambda-code-push-output"}
{"location":"projects/DBREInit/lambda-code/s3-put","name":"dbre-init-lambda-code-s3-put"}
{"location":"projects/DBREInit/lambda-code/sf-check","name":"dbre-init-lambda-code-sf-check"}
{"location":"projects/DBREInit/lambda-code/sf-collect","name":"dbre-init-lambda-code-sf-collect"}
{"location":"projects/DBREInit/lambda-code/sf-notify","name":"dbre-init-lambda-code-sf-notify"}
{"location":"projects/DBREInit/lambda-code/sf-setup","name":"dbre-init-lambda-code-sf-setup"}
{"location":"projects/DBREInit/lambda-code/sf-terminate","name":"dbre-init-lambda-code-sf-terminate"}
{"location":"projects/PowerPole/iac","name":"powerpole-iac"}
{"location":"projects/PowerPole/lambda-code/pp","name":"powerpole-lambda-code-pp"}
{"location":"projects/PowerPole/lambda-code/pp-approve","name":"powerpole-lambda-code-pp-approve"}
{"location":"projects/PowerPole/lambda-code/pp-request","name":"powerpole-lambda-code-pp-request"}
{"location":"projects/PowerPole/lambda-code/sf-deploy","name":"powerpole-lambda-code-sf-deploy"}
{"location":"projects/PowerPole/lambda-code/sf-notify","name":"powerpole-lambda-code-sf-notify"}
{"location":"projects/PowerPole/lambda-code/sf-setup","name":"powerpole-lambda-code-sf-setup"}
{"location":"projects/PowerPole/lambda-code/sf-terminate","name":"powerpole-lambda-code-sf-terminate"}
{"location":"projects/PowerPoleChecker/iac","name":"powerpolechecker-iac"}
{"location":"projects/PowerPoleChecker/lambda-code/left-instances","name":"powerpolechecker-lambda-code-left-instances"}
{"location":"projects/PowerPoleChecker/lambda-code/sli-notifier","name":"powerpolechecker-lambda-code-sli-notifier"}
{"location":"projects/dbre-toolkit/docker-image/shenron-wrapper","name":"dbre-toolkit-docker-image-shenron-wrapper"}
{"location":"projects/dbre-toolkit/iac","name":"dbre-toolkit-iac"}
{"location":"projects/dbre-toolkit/lambda-code/dt-list-dbcluster","name":"dbre-toolkit-lambda-code-dt-list-dbcluster"}
{"location":"projects/dbre-toolkit/lambda-code/dt-make-markdown","name":"dbre-toolkit-lambda-code-dt-make-markdown"}
{"location":"projects/dbre-toolkit/lambda-code/utility","name":"dbre-toolkit-lambda-code-utility"}
```

The Terraform IaC is likewise divided into four component-level projects. Being able to split projects this easily keeps each code base slim even within a single repository, and the `affected` command keeps CI/CD fast, so productivity increases without degrading the development experience.

```shell
$ yarn list-projects | grep iac
{"location":"projects/DBREInit/iac","name":"dbre-init-iac"}
{"location":"projects/PowerPole/iac","name":"powerpole-iac"}
{"location":"projects/PowerPoleChecker/iac","name":"powerpolechecker-iac"}
{"location":"projects/dbre-toolkit/iac","name":"dbre-toolkit-iac"}
```

## Issues

Finally, here are the challenges we faced in completing this development architecture, and how we solved them.

As mentioned in the introduction, zipping the Lambda code was an important point: unless the execution environment and the zip metadata (modification times, ownership, and so on) are exactly the same, Terraform detects a diff even when the code is unchanged. The solution was to build and zip the code inside a container, invoked from the task definition.
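The detail doing the heavy lifting here is normalizing file metadata before archiving: with timestamps pinned, two zips of identical content come out byte-identical, so hash comparisons stop producing false diffs. A standalone demonstration of the idea (not the project's actual build script; it assumes the Info-ZIP `zip` tool is installed):

```shell
# Two archives of identical content hash the same once their mtimes are pinned.
dir=$(mktemp -d); cd "$dir"
echo 'binary-payload' > main

touch --no-create -t 01010000 main   # pin the timestamps, as the build script does
zip -q first.zip main

sleep 1                              # simulate a later rebuild
echo 'binary-payload' > main         # same content, fresh mtime
touch --no-create -t 01010000 main   # pin again
zip -q second.zip main

sha256sum first.zip second.zip       # the two hashes match
```

The container then makes the rest of the environment (tool versions, ownership) reproducible as well, which is why the build below runs `touch` and `zip` inside it.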
`Dockerfile`

```dockerfile
FROM golang:1.20-alpine

RUN apk update && \
    apk fetch zip && \
    apk --no-cache add --allow-untrusted zip-3.0-r*.apk bash

COPY ./docker-files/go-single-module-build.sh /opt/app/go-single-module-build.sh
```

`./docker-files/go-single-module-build.sh`

```shell
#!/bin/bash
set -eu -o pipefail

while getopts "d:m:b:h" OPT; do
  case $OPT in
    d) SOURCE_ROOT_RELATIVE_PATH="$OPTARG" ;;
    m) MAIN_GO="$OPTARG" ;;
    b) BINARY_NAME="$OPTARG" ;;
    h) help ;;
    *) exit ;;
  esac
done
shift $((OPTIND - 1))

cd "/opt/mounted/$SOURCE_ROOT_RELATIVE_PATH" || exit 1
rm -f ./dist/*

CGO_ENABLED=0 GOOS=linux GOARCH=amd64 go build -o "./dist/$BINARY_NAME" "$MAIN_GO"

cd ./dist || exit 1

# Normalize ownership and timestamps so the zip's sha256 stays stable
chown "$HOST_USER_ID":"$HOST_GROUP_ID" "$BINARY_NAME"
touch --no-create -t 01010000 "$BINARY_NAME" ./*.tmpl

zip "$BINARY_NAME.zip" "$BINARY_NAME" ./*.tmpl
chown -R "$HOST_USER_ID":"$HOST_GROUP_ID" ../dist
```

There are other open issues as well, such as the current lack of local execution support. Going forward, I would like to bring not only Terraform but also SAM and CDK into the monorepo.

## Summary

In this article, we introduced Nx's powerful features through the story of managing AWS serverless with a monorepo tool. If this sounds like something you would like to do, would you like to consider working with us in the Platform Group? Thank you for reading.