KINTOテクノロジーズ の技術ブログ
こんにちは、ヒロヤ (@___TRAsh) です🎅 今年のモバイルアプリ開発グループはアウトプットに力を入れた年でした。iOSDCやDroidKaigiでのスポンサードや外部登壇、このテックブログの執筆など、様々な形でアウトプットを行っていて、みなさんにも知っていただける機会が増えたかなと感じています。

今年の最後の締めくくりとして、KINTO TechnologiesのAdvent CalendarでAndroid/Flutter/iOSで1シリーズ投稿します🎉

https://qiita.com/advent-calendar/2024/kinto-technologies

弊社のAndroid/Flutter/iOSエンジニアが頑張って1シリーズ書き切るので、ぜひチェックしてみてください🎅 本日はそんなAndroid/Flutter/iOSのアプリ開発をしているモバイルアプリ開発グループのことを紹介させていただきます。

## モバイルアプリ開発グループとは?

弊社KINTOテクノロジーズのモバイルアプリを、iOS、Android、Flutterで横断的に開発しているグループです。主には以下のプロダクトの開発をしています。

https://kinto-jp.com/entry_app/
https://kinto-jp.com/unlimited/app/
https://top.myroute.fun/
https://ppap.kinto-jp.com/prismjapan

上記以外にもPoCの要望などを受けた開発も行っています。また、業務以外にも横軸組織という利点を活かし、社内勉強会も頻繁に行っており、新しい技術や知識の共有を積極的に行っています。

## メンバーのみんなにアンケートを取りました

今回は弊社のエンジニアにアンケートを取ってみました。普段の業務ではなかなか知ることができない情報を取得できたので、ここで共有させていただきます。

### 1. あなたの開発環境は?(最大2つまで)

![開発環境円グラフ](/assets/blog/authors/HiroyaHinomori/mobile_advent_calendar_2024_12_01_01.png =450x)

Androidの割合が多いのは、弊社はAndroidエンジニアが多いことが要因ですね。国内では結構珍しいんじゃないかと思います。また、今年からFlutterチームができました!少しずつFlutterに関するアウトプットも出していければと思います。

### 2. 開発年数を教えてください

![開発年数円グラフ](/assets/blog/authors/HiroyaHinomori/mobile_advent_calendar_2024_12_01_02.png =450x)

弊社は中途採用がメインなので、開発年数が長い方が多いですね。10年以上の方がこんなにいるのは今回初めて知りました。これからも経験豊富なエンジニアの方々から学び続けていきたいです。

### 3. 出身地域を教えてください

![出身地域円グラフ](/assets/blog/authors/HiroyaHinomori/mobile_advent_calendar_2024_12_01_03.png =450x)

薄々気づいていたんですが、日本人が半分も居ないのはかなり珍しい現場じゃないかと思います。現場では基本的にみなさん日本語で話していますが、ところにより英語や、中国語、韓国語が話されるグローバルな環境です。

### 4. モバイルアプリ開発グループの良いところがあれば教えてください

頂いたコメントを元にワードクラウドを作成しました。雰囲気と技術が目立ちますね。日々の業務だけでは疎かになりがちなプロダクト間のコミュニケーションを大切にして技術共有を行なっている成果かなと感じます。この調子で来年も頑張っていきます💪

:::details Summary
**技術と学習環境**
- 最新技術への挑戦:自由度が高く、新しい技術の導入や利用に対してオープンな環境。
- スキル向上の支援:学びやすい環境で、勉強会や知識共有が積極的に行われている。
- スキルレベルの高さ:メンバー全体の技術レベルが高く、成長意欲が強い。
- アウトプットを重視:成果を出す努力を惜しまない姿勢がある。

**コミュニケーションと雰囲気**
- 親しみやすい雰囲気:メンバー同士が親切で、質問や相談がしやすい。
- 協力的なチーム:プロジェクトチーム内外での連携がスムーズで、協力し合える風土。
- 多様性とユーモア:多国籍で個性豊かなメンバーが集まり、文化の違いも楽しめる環境。
- 上下関係の壁が少ない:年齢やバックグラウンドに関わらずフラットなコミュニケーション。

**働きやすさ**
- 柔軟で自由な働き方:自由奔放で、それぞれのスタイルを尊重。
- 良好なチームの雰囲気:みんな仲が良く、協力し合う文化が根付いている。
- 優しい雰囲気:親しみやすく、安心して働ける環境が整っている。

これらの特徴から、学びながら成長し、多様性と協調性を楽しめる理想的なチーム環境と言えます。
:::

### 5. 今最も関心のある技術を教えてください

こちらもいただいたコメントを元にワードクラウドを作成しました(iOS / Android)。Android、iOSともにKMPが注目されていますね。FlutterやCompose Multiplatformなどのワードも見えるので、クロスプラットフォームに関心がある方が多いようです。僕の体感としても今年はクロスプラットフォームが躍進してきたなと感じます。あとはそれぞれ、言語の技術的な進化にも関心があるようです。また、AI周りの技術も注目されているのは昨今のトレンドを反映していて、メンバーの技術関心が高いことがわかります。

:::details Summary
**iOS トップ3技術**
- Swift / SwiftUI:Appleプラットフォームの主要技術で、特にUI構築への関心が高い。
- KMP(Kotlin Multiplatform):マルチプラットフォーム開発でiOS側にも活用。
- AI(MLC LLM、Apple Intelligence):機械学習やAppleのAI技術への注目。

**Android トップ3技術**
- Jetpack Compose:UI構築技術の中心。効率的なコード記述や熟練に向けた研究が活発。
- KMP(Kotlin Multiplatform):Androidアプリ開発での活用やCompose Multiplatformとの連携が注目される。
- Flutter:クロスプラットフォーム開発の選択肢として人気。
:::

## まとめ

モバイルアプリ開発グループは技術と学習環境、コミュニケーションと雰囲気、働きやすさ、これらの観点から学びながら成長し、多様性と協調性を楽しめる環境を作っていけているかなと感じます。モバイルの技術は日々トレンドが変化する業界でもあるので、これからもトレンドのキャッチアップを行いながら、グループでの成長を目指していきたいです。

最後に、この記事を読んでいただいた方々にも、モバイルアプリ開発グループの魅力を感じていただけたら幸いです🎅 それでは、明日からの弊社のAdvent Calendarもお楽しみに🎄
この記事は KINTOテクノロジーズアドベントカレンダー2024 の 1 日目の記事です🎅🎄

## はじめに

こんにちは、KINTO テクノロジーズ ( 以下、KTC ) の SCoE グループの多田です。SCoE は「Security Center of Excellence」の略で、少し耳慣れない方もいらっしゃるかもしれません。KTC では、今年の 4 月に CCoE チームを SCoE グループとして再編しました。再編の経緯については こちらのブログ にまとめてありますので、ぜひご覧ください。また、私は大阪オフィスである Osaka Tech Lab に勤務しており、Osaka Tech Lab についても こちらのブログ でご紹介していますので、ぜひ覗いてみてください。

KTC では、多くのプロダクション環境を Amazon Web Services ( 以下、AWS ) 上で運用していますが、最近では OpenAI の活用に伴い、Microsoft Azure ( 以下、Azure ) の利用も増えてきました。SCoE のタスクのひとつに、グループポリシーに基づくセキュリティ設定を事前に実施した上で環境を提供することがあります。本ブログでは、Azure サブスクリプションを提供する際に行っているセキュリティ設定について、いくつかご紹介したいと思います。Azure 特有の用語が登場しますので、詳細については公式サイトなども合わせてご確認ください。

## Azure ランディングゾーンと管理グループの設計

セキュリティ設定を考える上で、まずはランディングゾーンと管理グループについて理解することが重要です。KTC のサブスクリプション環境は Azure ランディングゾーンの設計原則に基づいて設計・構築しています。ただし、Microsoft の公式ランディングゾーン をそのまま使用するのではなく、ベストプラクティスを参考にしつつ、KTC の環境に合わせてライトに設計しています。

ランディングゾーン内では、サブスクリプションを論理的にまとめて効率的に管理するために、いくつかの管理グループを設計しています。以下の図はその概要です。これらの管理グループを使用し、各サブスクリプションに適切なポリシーを適用しています。

| 管理グループ | 概要 |
| --- | --- |
| KTC | 管理グループの root、各管理グループの共通となるポリシーを適用 |
| Management | 全サブスクリプションの Activity Log の集約用のサブスクリプションなど、セキュリティ系で利用するサブスクリプションを管理 |
| Workload | ワークロード用のサブスクリプションを管理 |
| Sandbox | Sandbox 用のサブスクリプションを管理 |
| PolicyStaging | Azure ポリシーのテストを行うための管理グループ、サブスクリプションを管理 |

ポイントとして、ワークロード用の管理グループは 1 つに統一しています。この管理グループにはプロダクト用のサブスクリプションが含まれ、1 つのサブスクリプション内で本番・開発・ステージング環境をリソースグループ単位で分離しています。環境分離の設計には様々なアプローチがありますが、KTC ではワークロードが多くならないこと、特定の Azure サービスに限定していること、サブスクリプション単位の費用管理が容易であることから、この形でスタートしました。将来的に Azure の利用が増えれば、再検討も視野に入れています。

## Management 管理グループの役割

Management 管理グループは、全サブスクリプション共通の運用管理やセキュリティツール展開用のサブスクリプションを集約するための管理グループです。運用監視を担当するメンバーのみがアクセスできるようにしており、例えば、全サブスクリプションの Activity Log を集約・監視するサブスクリプションをここで管理しています。

## Azure ポリシーを利用したセキュリティ設定

Azure ポリシー を利用することで、セキュリティやガバナンスに沿ったリソースの作成が可能で、違反があれば検出・修復もできます。KTC でも Azure ポリシーを使用し、サブスクリプション作成時に自動でセキュリティ設定を適用しています。現在はビルトインのポリシーのみを使用しており、カスタムポリシーの作成までは実施していません。今後、ワークロードが増えるなど環境が変われば検討していきたいと思います。以下は代表的な Azure ポリシーを活用した設定例です。

- Activity Log の監視と保管
- Defender for CSPM の設定と利用

Azure ポリシーは、予防的ガードレールとして、KTC では過度に適用する方針を取っていません。これは、KTC のワークロードが比較的少ないことや、エンジニアのスキルレベル、運用コスト等を考慮した結果です。厳格な予防的ガードレールで制約を増やすよりも、一定の自由度をエンジニアに委ね、発見的ガードレールで検出された内容をカイゼンするアプローチをとっています。これにより、エンジニアが問題解決を通じてスキルを磨き、興味を持って成長できるよう意図しています。

### Activity Log の監視と保管

各サブスクリプションの Activity Log は、Management サブスクリプションの Log Analytics ワークスペースに集約しています。サブスクリプションが新規で追加された場合でも、Azure ポリシーによって自動的に Audit Log が集約されるよう設定しています。利用している Azure ポリシーは以下となります。

- 指定された Log Analytics ワークスペースにストリーミングするように Azure アクティビティログを構成します

Log Analytics の保管期間はデフォルトで 90 日なので、ストレージアカウントにバックアップを保管していますが、こちらは Azure ポリシーには設定がなく、手動で行っています。カスタムポリシーを作成することで自動設定できることは確認しているのですが、そこまでは実施していません。

### Defender for CSPM の設定と利用

発見的ガードレールと呼ばれますが、Azure 環境にリスクのある設定や操作が行われた場合に、Cloud Security Posture Management ( CSPM ) のソリューションを利用して、これらのリスクを検知します。Azure の場合、Microsoft Defender for Cloud が CSPM として利用できます。Microsoft Defender for Cloud は、CNAPP と呼ばれるクラウドネイティブアプリケーション保護プラットフォーム ( Cloud Native Application Protection Platform ) のためのソリューションであり、CSPM や Cloud Workload Protection Platform ( CWPP ) 等をカバーするセキュリティソリューションです。

Microsoft Defender for Cloud の CSPM 機能には、無料の Foundational CSPM と、サーバ、データベース、ストレージなどのリソースに対して費用が発生する Defender CSPM があります。KTC の場合、より詳細な CSPM のチェックが可能な Defender CSPM を利用しています。Defender CSPM は、以下の Azure ポリシーを利用して、サブスクリプション発行時に自動設定しています。

- Microsoft Defender CSPM を有効にするように構成する

設定後は、定期的に Microsoft Defender for Cloud からアラート状況を監視し、リスクのある設定があれば、サブスクリプションを利用しているプロダクト側と連携し、リスクのカイゼンを行います。クラウドワークロード保護については、今の時点では実施しておらず、今後、リソースが増えるなどに応じて検討していきたいと思います。

### 脅威検知

Azure 環境のセキュリティインシデントや不正アクセスを早期に発見するために、脅威検知の仕組みを導入しています。KTC でもそうですが、多くの AWS 導入会社であれば、Amazon GuardDuty で実現している仕組みだと思います。Azure の場合、Microsoft Sentinel を使うことが鉄板のようですが、KTC の環境の場合、導入の手間や費用面を考慮し、サードパーティ製品の sysdig の CDR ( Cloud Detection and Response ) 機能を使って実現しています。

CDR の実態は、OSS の Falco です。Falco は、ホスト、コンテナ、Kubernetes、クラウド環境全体に対して、異常な振る舞いや潜在的なセキュリティ脅威等の違反を検出し通知します。脅威検知ルールは、一般的なものが提供されており、カスタマイズやチューニングも可能で使い勝手がよいです。KTC では、sysdig を Google Cloud 環境の CSPM や脅威検知として既に利用していたので、そのノウハウを Azure にも適用しています。

## まとめ

KTC では、Azure サブスクリプションを提供する際に行っているセキュリティ設定について、いくつかをご紹介しました。セキュリティを強化するために、Azure ポリシー、Microsoft Defender for Cloud や sysdig の CDR 機能を活用しています。「Azure ポリシーを利用したセキュリティ設定」に記載しましたが、予防的ガードレールをどこまで厳格にするかは、各社の状況によるものが大きいと思いますので、自社の状況にあわせて最適な設計・運用をしていただくのが良いと思います。この内容が、Azure を利用する際のセキュリティ設定の参考になれば幸いです。最後まで読んでいただき、ありがとうございました。

## さいごに

SCoE グループでは、一緒に働いてくれる仲間を募集しています。クラウドセキュリティの実務経験がある方も、経験はないけれど興味がある方も大歓迎です。お気軽にお問い合わせください。詳しくは、こちらをご確認ください。
この記事は KINTOテクノロジーズアドベントカレンダー2024 の1日目の記事です🎅🎄

こんにちは!リナ( @chimrindayo )です。KINTOテクノロジーズで、エンジニアとして モビリティマーケット の開発運用と技術広報を兼務しています。今回は株式会社Luupの t-kurimuraさん と一緒に結成した"Mobility Night"という勉強会をご紹介します🙌

## Mobility Nightとは

引用元: t-kurimuraさんの作成資料

モビリティに関連する企業や団体がソフトウェアの技術や知見について共有し、業界を盛り上げていくための勉強会です🚀 例えばGPS・IoT・品質保証・プロダクトデザインなど、モビリティを取り扱うソフトウェアの技術的な課題には共通点があると考えています。こうしたモビリティ業界ならではの知見を共有することで、モビリティ業界全体のソフトウェア技術の発展やプロダクトの全体的な向上を願って結成されました。"Mobility Night"という命名には、みんなが集まってカジュアルに情報発信と交流ができる場になって欲しいという思いを込めています。

## クローズドイベントの開催

初回は、Mobility Night#0 (第0回)と称してクローズドの勉強会を開催しました。登壇企業である株式会社Luup、チャリチャリ株式会社、GO株式会社、newmo株式会社をはじめとしたモビリティ企業に所属のみなさまにお声がけし、それぞれの事業・プロダクトの紹介を中心にモビリティにまつわる情報を互いに共有しました。まずは今後オープンな勉強会を開催するにあたって、モビリティ業界の技術勉強会を開催すること自体に共感いただけるのかどうか、そして今後どんなテーマで勉強会を開催すれば有意義な時間になるか共通点を探りたいという考えから、クローズドイベントの開催に踏み切りました。

## クローズドイベントの開催結果

ありがたいことに、クローズドイベントは大変好評かつ盛況だったと思います。どれくらい盛り上がったかというと...そのまま2次会に行く人がいたぐらいです🍻 同じモビリティ業界の企業で働く者同士、モビリティ業界の動向や近未来のモビリティについて熱く語り合うことができたのではないかと感じています🔥

ここで勉強会のアンケートにご記入いただいた感想の中から一部をご紹介します。

- モビリティ業界の情報収集が有益でした
- 業界特化した勉強会もいいものですね
- モビリティを軸に集まっているので全ての話が興味深く聞けました!
- 共有される情報が近しく、今後も繋がりたいと思った
- なんかホーム感がある

## 今後のMobility Night

まずは継続開催を目指して、隔月(偶数月)で勉強会を開催予定です。そして、現時点ではイベントの開催初期ということもあり、運営メンバーがお声がけさせていただいた企業のみなさまにご登壇いただいておりますが、今後は運営メンバーからのお声がけの有無に関わらず、登壇しやすい雰囲気を作っていきたいです!ぜひ、登壇したい方がいればお気軽にconnpassからのエントリーをお待ちしております🙌(登壇人数が多い場合は、抽選させていただく場合がございます。)

また、Mobility Nightの最新情報は、Discordで公開しております。「Mobility Nightに参加したい」「登壇したいけど事前に相談したい」「運営メンバーとして参加したい」など、Mobility Nightに少しでもご興味があるみなさまは、ぜひDiscordにご参加ください!私個人としては、Mobility Nightの情報共有だけでなく、モビリティに関する勉強会の共催募集などもできる場になるといいなと考えています。

https://discord.gg/nn7QW5pn8B

## 次回開催のお知らせ

Mobility Night #1 を以下の日程で開催いたします!

https://mobility-night.connpass.com/event/334400/

- 日時: 2024年12月5日(木) 18:30~
- テーマ: GPS・位置情報
- 会場: KINTOテクノロジーズ 室町オフィス

現在モビリティ業界の企業・団体に属しているか否かに関わらず、モビリティ業界に興味があるみなさまにご参加いただきたいと思っております!ご興味がある方は、残席わずかのためお早めにお申し込みくださいませ。当日のみなさまのご参加を心よりお待ちしております。
## Introduction

Hello! This is Iki from the Cloud Infrastructure Team of the Platform Group (in the Osaka Tech Lab) at KINTO Technologies. I heard that a skilled young team member from Osaka will be writing about troubleshooting CloudFront Functions, so I'll cover CloudFront edge functions as some foundational knowledge in advance!

## Let's start with an overview of CloudFront

CloudFront is a content delivery network (CDN), designed to bring content closer to users by strategically placing CDN points worldwide and caching content at these locations. Users can enjoy low-latency access by connecting to the nearest CDN point. There are two types of edge locations: one closer to the user, and a regional edge cache (regional edge location) situated closer to the origin server.

## What is an Edge Function?

An edge function is a function that runs on an edge server, processing traffic at the location where it is received. By using edge functions, you can execute operations at the time of request or response. In our company, we mainly implement and operate the following functions:

- Redirect the URL of the response
- Add headers
- Resize images according to the request parameters

## When to Run an Edge Function

An edge function can be run at the following four times:

- Viewer request
- Viewer response
- Origin request
- Origin response

With CloudFront, the viewer manages all communication, so if there's no cached content, the origin handles common processing. This setup is useful for controlling information sent to the origin, resizing cached data, and other optimizations.

## Types of Edge Functions

There are two types of edge functions: CloudFront Functions and Lambda@Edge.

### CloudFront Functions

An edge function that runs at an edge location close to the user, suited to large-scale, latency-sensitive CDN customizations. CloudFront Functions are ideal for simple tasks like header manipulation and redirection, and they cost less than one-sixth of Lambda@Edge.
Since it runs at the edge location closest to the user, it responds to viewer requests and responses, but not origin requests and responses.

### Lambda@Edge

Edge functions that run in the regional edge cache close to the origin. For processing tasks that CloudFront Functions cannot handle, you can use other AWS services, including the AWS SDK, and access file systems by leveraging Lambda@Edge. Lambda@Edge is an extension of AWS Lambda, and although it appears the same on the console, it has some functional limitations, such as not allowing user-defined environment variables. Please keep these restrictions in mind.

While the Lambda@Edge function itself is stored in the Virginia region (us-east-1), it operates by creating replicas in various edge locations, allowing it to run within each regional edge cache. As a result, the concurrent Lambda execution limit and access limits for each service apply in the specific region where the function runs (such as the Tokyo region in Japan). Be mindful of these limits to ensure smooth operation. It supports viewer requests, viewer responses, origin requests, and origin responses.
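As a concrete illustration (this is a hedged sketch, not code from our services), here is a minimal Lambda@Edge handler in Python, one of the supported Lambda@Edge runtimes, that adds a `Strict-Transport-Security` header on the viewer-response event. The event shape follows the CloudFront event record structure (`event["Records"][0]["cf"]["response"]`); the header value is illustrative.

```python
# Minimal Lambda@Edge sketch (illustrative, not production code):
# add a security header on the viewer-response event.
# CloudFront passes the response under event["Records"][0]["cf"]["response"],
# with each header stored as a list of {"key", "value"} dicts under a
# lowercase header-name key.

def handler(event, context):
    response = event["Records"][0]["cf"]["response"]
    headers = response["headers"]

    # Add (or overwrite) the Strict-Transport-Security header.
    headers["strict-transport-security"] = [{
        "key": "Strict-Transport-Security",
        "value": "max-age=63072000; includeSubDomains",
    }]

    # The modified response object must be returned to CloudFront.
    return response
```

Note that, as described above, such a function has to be deployed in us-east-1 and cannot rely on user-defined environment variables.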
## The Differences between CloudFront Functions and Lambda@Edge

| | CloudFront Functions | Lambda@Edge |
| --- | --- | --- |
| Programming language | JavaScript | Node.js / Python |
| Event source | Viewer request, Viewer response | Viewer request, Viewer response, Origin request, Origin response |
| Scale | More than 10,000,000 requests per second | Up to 10,000 requests per second per region |
| Duration of the function | Less than 1 ms | Viewer: 5 seconds / Origin: 30 seconds |
| Maximum memory | 2 MB | 128 ~ 3,008 MB |
| Maximum size of the function code and included libraries | 10 KB | Viewer: 1 MB / Origin: 5 MB |
| Network access | No | Yes |
| Access to file systems | No | Yes |
| Access to the request body | No | Yes |
| Access to location and device data | Yes | Viewer request: No / Viewer response: Yes / Origin request: Yes / Origin response: Yes |

Quote: CloudFront Functions and Lambda@Edge selection

## Usage of Edge Functions in KINTO Technologies

At KINTO Technologies, we recommend CloudFront Functions, although we are not yet fully utilizing them. They can help manage costs and reduce concurrent Lambda executions for viewer requests and responses, especially in high-traffic environments. That said, CloudFront Functions have more limitations than Lambda@Edge, so rather than focusing solely on minimizing daily AWS expenses, we also prioritize reducing development and operational costs by using Lambda@Edge where appropriate.

## Conclusion

In this post, I discussed CloudFront edge functions (CloudFront Functions and Lambda@Edge). By understanding and leveraging edge functions, you can enhance your system's capabilities and improve the user experience. However, given their strict limitations, any mistakes can result in errors and unexpected outcomes. I hope this article has provided valuable insights and will be beneficial for your development work. Stay tuned for an upcoming post on troubleshooting CloudFront Functions!

We're also seeking new team members to join us at the Platform Group (Osaka Tech Lab), so don't hesitate to reach out!
## Kotlin / Ktorで作るクラウドネイティブなマイクロサービス(オブザーバビリティ編)

こんにちは。Woven Payment Solution開発グループの楢崎と申します。我々は、Woven by Toyota で Toyota Woven City で利用される決済基盤のバックエンド開発に携わっており、Ktor というKotlin製のWebフレームワークを用いて開発しています。これらのバックエンドアプリケーションは、Woven Cityで利用される、KubernetesをベースにしたCity Platformという基盤で動作し、マイクロサービスを構成しています。

今回は、マイクロサービスアーキテクチャを構成する上でのマイクロサービスのペインポイントと、それらを解消する上で必要不可欠となる、オブザーバビリティ(Observability)を向上させるためのtipsを、我々が利用しているKtorというWebフレームワークと、マイクロサービスをホストするプラットフォームとしてKubernetesを例にいくつかご紹介したいと思います。またKubernetesと合わせて、いわゆる「クラウドネイティブ」な技術スタックも合わせてご紹介したいと思います。ログ収集ツールとして Loki、メトリクス収集ツールとして Prometheus、可視化ツールとして Grafana を今回は用いています。

実際にJavaやKotlinを使ってマイクロサービスを開発している方々はもちろん、プログラミング言語を問わず、マイクロサービスやKubernetesをこれから導入しようとしている開発者の皆さんの参考になれば幸いです。この手順を再現する方法とサンプルコードはまとめて記事の最後に記載していますので、お時間ある方は是非手を動かしてみてください!

## 最初に: マイクロサービスのつらみ

一般的に、マイクロサービス化することによって、モノリシックなアプリケーションの諸問題は解消することができますが、一方でアプリケーションの複雑性が増して、問題が発生した際の切り分けが非常に難しくなってしまいます。今回は以下の3つの問題を考えてみます。

- ペインポイントその1: エラーがどのタイミングでどのサービスが起因となって起こったのかよくわからない
- ペインポイントその2: 依存性のあるサービスの稼働状況を常に考慮しないといけない
- ペインポイントその3: リソース起因のパフォーマンス低下の切り分けが難しい

オブザーバビリティを向上させることによって、それらの問題をどのように解消できるのか、今回はKtorを例に、わずか3つのプラグインの導入と数行のコードの追加で、ペインポイントごとに解決策を実装してみたいと思います。

## 施策1. CallIdの導入

今回、以下のような、マイクロサービスでよくあるクラスタ内でAPIを呼び出す2つのサービスを作成し、どのようにログが見えるか確認してみます。

```mermaid
sequenceDiagram
    participant User as ユーザー(クラスタ外)
    participant A as Frontendサービス
    participant B as Backendサービス
    User->>A: /call リクエスト
    Note over User,A: クラスタ外からのリクエスト
    A->>B: / リクエスト
    Note over A,B: Frontendでの結果をBackendに渡す
    B-->>A: / レスポンス
    Note over B,A: Backendで処理した結果を返す
    A->>User: /call レスポンス
```

ログは標準出力へ出力し、Kubernetes上に別にデプロイしたログ収集ツール(今回はLoki)で収集することを前提とします。サービスをそれぞれ、呼び出し元(frontend)と呼び出し先(backend)とします。

監視する時にそれぞれのサーバで起こっていることは、ロギングプラットフォームなどでポッド名などを指定して見ることができるかもしれませんが、サーバをまたいだリクエストは、お互い関連させて見ることはできません。特にリクエスト数が増大した場合、時系列でログを表示するだけでは、どのアプリケーションログ同士が関連しているか切り分けるのは非常に難しくなってしまいます。

大量にリクエストが来ると、どのリクエストとレスポンスが関連があるかわからない...
別のサーバ上で起こった因果関係のあるイベントをネットワーク越しに関連させる仕組みを分散トレーシング(distributed tracing)と言います。一般的には、Istio等のサービスメッシュを利用すればZipkinやJaegerで関連しているリクエストの可視化は可能で、直感的にどこでエラーが発生したか理解することはできます。一方で、ログからキーワードで検索するなど、アプリケーションログを中心としたトラブルシュートの際の使い勝手はあまりいいとはいえません。

そこで、Ktorの CallId という機能を利用します。これでロギングプラットフォームで特定のCallIdのログを、キーワードとして検索して見ることができます。またネットワークレイヤーの設定が不要なので、サービスメッシュなどを導入しなくてもアプリケーションエンジニア側で完結し、融通が効きます。

実際にアプリケーションを動かしてGrafanaでログを確認してみましょう。今回はフロントエンド、バックエンド共に同じコンテナイメージを用意するので、生成するプロジェクトは一つでOKです。こちらの手順 にそってソースコードをテンプレートから生成します。

```kotlin
dependencies {
    implementation("io.ktor:ktor-server-call-logging-jvm:$ktor_version")
    implementation("io.ktor:ktor-server-call-id-jvm:$ktor_version")
    implementation("io.ktor:ktor-server-core-jvm:$ktor_version")
}
```

上記のような必要なライブラリが参照されています。生成されたコードのうち、ログに関する部分を以下のように修正します。(各行が何を表すかをコメントとして付記していますが、修正する必要はありません。)

```kotlin
fun Application.configureMonitoring() {
    install(CallLogging) {
        level = Level.INFO
        filter { call -> call.request.path().startsWith("/") } // ログを出力する条件を指定できる
        callIdMdc("call-id") // これを設定しておくことで、logback.xmlの %X{call-id} の部分に値を埋め込む事が可能
    }
    install(CallId) {
        header(HttpHeaders.XRequestId) // どのヘッダーにIDの値を格納するか
        verify { callId: String ->
            callId.isNotEmpty() // 値が存在するか検証する
        }
+       generate {
+           UUID.randomUUID().toString() // なかったら値を生成して埋め込む
+       }
    }
}
```

HTTPクライアントの実装では、リクエストのヘッダーに値を入れて同じCallIdが伝搬するように設定しておくと良いでしょう。以下のコードをそれぞれ追加して、実際にCallIdがサーバ間の通信で伝搬するか確認してみます。

```kotlin
dependencies {
    ...
+   implementation("io.ktor:ktor-client-core:$ktor_version")
+   implementation("io.ktor:ktor-client-cio:$ktor_version")
    ...
}
```

```kotlin
routing {
+   get("/call") {
+       application.log.info("Application is called")
+       val client = HttpClient(CIO) {
+           defaultRequest {
+               header(HttpHeaders.XRequestId, MDC.get("call-id"))
+           }
+       }
+       val response: HttpResponse = client.get("http://backend:8000/")
+       call.respond(HttpStatusCode.OK, response.bodyAsText())
+   }
}
```

サンプルコードを以下を参考にビルド、デプロイできる様になったら、実際に以下のコマンドを実行してAPIを呼んでみてください。

```shell
curl -v localhost:8000/
curl -v -H "X-Request-Id: $(uuidgen)" localhost:8000/call
```

サーバー間でCallIdが伝播して検索キーワードとして検索できる様になった

ヘッダーに値を入れなくても、ログ上でCallIdの値が追加されたかと思います。またこちらのコマンドで生成されたUUIDの値を検索すると、一連の複数のサーバ上でのイベントを関連付けることができている事がわかります。

## 施策2. Liveness Probe、Readiness Probeの設定

Kubernetesのコントロールプレーンにアプリケーションの死活状況を伝える仕組みとして、liveness probeとreadiness probeという仕組みがあります。それぞれ何を表すのかは、こちらのGoogleの記事 が参考になりますが、

- Liveness Probe: コンテナアプリケーション単体での死活状態
- Readiness Probe: 依存関係のあるサービスを含めたアプリケーションが稼働可能な状態

をそれぞれAPI経由で取得できるようにしたものを言います。これらを設定することによって、起動に失敗したコンテナを効率的にリサイクルできたり、起動に失敗したコンテナにアクセスしないよう、トラフィックを制御できます。

Ktorでこれらを実装してみます。ここでは、特にライブラリは使用しません。実装の方針としては、liveness probeは自分自身の死活状況をKubernetesに伝えるためのものなので、リクエストに対してOKを返すだけで大丈夫です。Readiness probeの方で、依存しているサービスや接続しているデータベースなどにpingを送ります。また期待できる時間までにレスポンスが得られなかった事態に備えて、リクエストタイムアウトもここで設定しておきましょう。

```kotlin
routing {
    ...
    get("/livez") {
        call.respond("OK") // Webサーバが起動しているかどうかだけ伝えられればいいので、200をかえせばOK
    }
    get("/readyz") {
        // DBやその他の依存サービスに応じてpingを送る実装をアプリケーションの用途に応じて記述
        // SQL ClientやHTTP Clientにはリクエストタイムアウトが設定できるので、期待した時間内に接続できるか記述する
        call.respond("OK")
    }
}
```

これらのAPIのエンドポイントが存在することをKubernetesのコントロールプレーンに伝える必要があります。Deploymentの定義に以下を追記します。これらには、リクエストを処理可能になるまでの時間も設定できるので、初回起動に時間がかかる場合でも想定する経過時間を入れておけば誤検知しない様にできます。

```yaml
...
livenessProbe:
  httpGet:
    path: /livez
    port: 8080
readinessProbe:
  httpGet:
    path: /readyz
    port: 8080
  initialDelaySeconds: 15 # コンテナが起動して15秒後にreadiness probeに聞きに来る、defaultは0
  periodSeconds: 20       # 20秒に一回
  timeoutSeconds: 5       # 5秒以内に結果が返ってくることを期待
  successThreshold: 1     # 1回成功すれば起動成功と判定
  failureThreshold: 3     # 3回連続失敗すればpodが再起動される
...
```
以上で設定は完了です。エンドポイント内にsleepなどを入れたり、各種パラメータを変えて振る舞いを確認してみてください。また、今回は言及までにとどめますが、異常を検知した場合に、Prometheusの Alertmanager などを利用して通知する仕組みも構築しておきましょう。

## 施策3. Micrometerの設定

上記の2つを導入する事によって、かなりオブザーバビリティは向上したかと思います。またKubernetesではPod, Nodeレベルでの監視もできると思いますが、アプリケーションのランタイムレベルの監視が不十分です。一般的にKotlinのアプリケーションはJVM上で動作していて、JVM上のCPUやメモリ等の使用量やガベージコレクションの挙動を監視することによって、ランタイムを外形監視することができます。それによって、意図しないランタイム起因のパフォーマンスの低下などを捉える事ができます。

では、マイクロサービスアーキテクチャではどのように導入するのが良いでしょうか?モノリスであれば、動作させるサーバにエージェントを入れることで比較的シンプルに導入できるはずです。一方で、コンテナが生成、消滅を繰り返すKubernetesでのエージェントの導入はあまり現実的ではありません。Ktorには、Micrometer というJava界隈ではデファクトなメトリクス取得の仕組みをPrometheusで収集できるプラグイン があります。冒頭で説明した、プロジェクトをテンプレートから作成する際に、以下のパッケージとソースコードがプロジェクトに追加されます。

```kotlin
implementation("io.ktor:ktor-server-metrics-micrometer-jvm:$ktor_version")
implementation("io.micrometer:micrometer-registry-prometheus:$prometeus_version")
```

```kotlin
val appMicrometerRegistry = PrometheusMeterRegistry(PrometheusConfig.DEFAULT)

install(MicrometerMetrics) {
    registry = appMicrometerRegistry
}

routing {
    get("/metrics-micrometer") {
        call.respond(appMicrometerRegistry.scrape())
    }
}
```

これらをKubernetesの設定ファイル上で提示すれば、勝手にPrometheusがエンドポイントを叩いてデータを集積してくれます。

```yaml
kind: Service
metadata:
  name: backend
  namespace: sample
+ annotations:
+   prometheus.io/scrape: 'true'
+   prometheus.io/path: '/metrics-micrometer'
+   prometheus.io/port: '8080'
```

更に、マーケットプレイスに公開されているダッシュボードをGrafanaに追加する事によって、非常に簡単にJVMのパフォーマンスを可視化することができ、アプリケーションの透明性を上げることができます。

- マーケットプレイスからIDをコピペして持ってくるだけで登録可能
- メモリ、CPU、ガベコレなどをpod単位で表示する事が可能

またこれらのメトリクスからアプリケーションが常時どれくらいのCPUやメモリを利用するのかを監視して、コンテナのCPUリソースを設定することによって、Kubernetesクラスタ全体のリソース使用の効率化にも繋がります。(これらのリソースの設定は、アプリケーションを正しくスケールアウトさせるためにも必要となってきます)

```yaml
resources:
  requests:
    memory: "512Mi"
    cpu: "500m"
  limits:
    memory: "512Mi"
    cpu: "750m"
```

## 最後に

Ktorというwebフレームワークでは、プラグインベースで、既存のアプリケーションの動作を大きく変更することなく非機能要件を向上させられる事が、おわかりいただけたと思います。複雑性が増したシステムでは、一箇所でも死角を作ってしまうと、バグの原因を検証するために立てた仮説が検証できず、迷宮入りしてしまいます。どんなアーキテクチャであっても、不具合が起こった時に備えて常に死角を減らす努力をすることが大事です。
今回取り上げた内容で、マイクロサービスアプリケーションのWebフレームワークのオブザーバビリティの機能に関してご紹介できたかと思います。もし今後マイクロサービスを採用したいとお考えの方で、フレームワークの選定に迷われている方は、これらの機能があるかどうかも技術選定のポイントとして加えたいですね。他にもマイクロサービスを構成し、円滑に運用する上でのベストプラクティスとしてGitOpsの実践、サービス間の認証認可、負荷分散などが必要になってきますが、それはまた別の機会にご紹介できたらと思います。

最後に当社では 様々なポジション で採用していますので、ご興味あればまずはカジュアル面談からどうぞ。

## (参考)環境設定とサンプルコード

上記の解説をお手元で再現するにあたって、Javaの実行環境と、Kubernetesを有効化したDocker Desktop、Helm が動作することを前提としています。これらはMac / Linuxで動作を確認しております。(Windowsをお使いの方はWSL2をご利用ください。)Kubernetesがお手元の端末にある場合を想定しています。クラウド上にある場合は適宜読み替えてください。

この記事では、ログ収集ツールとして Loki、メトリクス収集ツールとして Prometheus、可視化ツールとして Grafana を利用しています。ソースはテンプレートからゼロから作成し、Jib というツールを用いてビルドタスクを実行することでDockerイメージを作成することとします。以下の例では、Kotlin Script(.kts)のGradleでビルドタスクを実行するものとします。またコンテナをクラスタにデプロイするための Skaffold というツールもインストールしておくと、自動でDocker tagの設定からKubernetesへのデプロイを実行できます。

```shell
helm repo add grafana https://grafana.github.io/helm-charts
helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
helm repo update
helm install prometheus prometheus-community/prometheus -n prometheus --create-namespace
helm install loki grafana/loki-stack -n grafana --create-namespace
helm install grafana grafana/grafana -n grafana
export POD_NAME=$(kubectl get pods --namespace grafana -l "app.kubernetes.io/name=grafana,app.kubernetes.io/instance=grafana" -o jsonpath="{.items[0].metadata.name}")
kubectl --namespace grafana port-forward $POD_NAME 3000
# 上記コマンドは実行後閉じないように、別のターミナルを開いて以下を実行
kubectl get secret --namespace grafana grafana -o jsonpath="{.data.admin-password}" | base64 --decode #| pbcopy # Macをお使いの方は末尾のコメントアウトを外すとクリップボードにパスワードがコピーされます
```

これでブラウザから http://localhost:3000 のGrafanaにアクセスして、ユーザID: admin、パスワードは最後のコマンドの結果を入力してログインし、データソースをそれぞれ設定します。

- Loki: http://loki:3100
- Prometheus: http://prometheus-server.prometheus.svc.cluster.local

これで監視ツールの方は完了です。コードの方はIntelliJで新規のKtorのアプリケーションをテンプレートから新規作成します。IntelliJから以下を選びます。VS Codeなどをお使いの方は こちらのサイト からダウンロード可能です。今回はフロントエンド、バックエンド共に同じコンテナイメージを用意するので、生成するプロジェクトは一つでOKです。
以下のDockerでビルドするためのJibの設定をいれて、JibのGradleタスク ./gradlew jibDockerBuild でビルドできることを確認してください。

```kotlin
plugins {
    application
    kotlin("jvm") version "1.8.21"
    id("io.ktor.plugin") version "2.3.1"
+   id("com.google.cloud.tools.jib") version "3.3.1"
}
...
+jib {
+    from {
+        platforms {
+            platform {
+                architecture = "amd64"
+                os = "linux"
+            }
+        }
+    }
+    to {
+        image = "sample-jib-image"
+        tags = setOf("alpha")
+    }
+    container {
+        jvmFlags = listOf("-Xms512m", "-Xmx512m")
+        mainClass = "com.example.ApplicationKt"
+        ports = listOf("80", "8080")
+    }
+}
```

今回追加したログを注視できるよう、Logbackのログレベルを変更しておきましょう。また監視用に追加したエンドポイントはノイズになってしまうので、表示されないようにしてしまいます。

```xml
- <root level="trace">
+ <root level="info">
```

```kotlin
install(CallLogging) {
    level = Level.INFO
-   filter { call -> call.request.path().startsWith("/") }
+   filter { call -> !arrayOf("/livez", "/readyz", "/metrics-micrometer")
+       .any { it.equals(call.request.path(), ignoreCase = true) }}
    callIdMdc("call-id")
}
```

ここまでソースに追記したら、以下のコマンドでコンテナイメージがKubernetes上にデプロイされて、アプリケーションが実行されます。Grafana上にログやメトリクスが流れてくるか確認します。services.yaml ファイルは少々長いので一番最後に記載しております。

```shell
./gradlew jibDockerBuild && kubectl apply -f services.yaml # Buildするたびにdocker tagを修正する

# Skaffoldをインストールしている方は以下のコマンドで
skaffold init # yamlファイルが生成される
skaffold run  # 一回だけビルドデプロイ作業を実行
skaffold dev  # ソースコードを修正するたびに継続的にビルド、デプロイ作業が走る
```

SkaffoldファイルにportForwardを記述しておくと、自動でlocalhost:8000にアクセスできるようになって便利です。

```yaml
apiVersion: skaffold/v4beta5
kind: Config
metadata:
  name: observability
build:
  artifacts:
    - image: sample-jib-image
-     buildpacks: # ビルドが遅いので消す
-       builder: gcr.io/buildpacks/builder:v1
+     jib: {} # JAVA_HOMEに正しいPATHが入っていないと実行エラーになる可能性あり
manifests:
  rawYaml:
    - service.yaml
+portForward:
+  - resourceType: service
+    resourceName: frontend
+    namespace: sample
+    port: 8000
+    localPort: 8000
```

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: sample
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: backend-deployment
  namespace: sample
spec:
  replicas: 2
  selector:
    matchLabels:
      app: backend
  template:
    metadata:
      labels:
        app: backend
    spec:
      containers:
        - name: backend
          image: sample-jib-image:alpha
          imagePullPolicy: IfNotPresent
          ports:
            - containerPort: 8080
          # Liveness probe, readiness probeを実装するまでコメントアウトしておいてください
          # livenessProbe:
          #   httpGet:
          #     path: /livez
          #     port: 8080
          #   initialDelaySeconds: 15
          #   periodSeconds: 20
          #   timeoutSeconds: 5
          #   successThreshold: 1
          #   failureThreshold: 3
          # readinessProbe:
          #   httpGet:
          #     path: /readyz
          #     port: 8080
          resources:
            requests:
              memory: "512Mi"
              cpu: "500m"
            limits:
              memory: "512Mi"
              cpu: "750m"
---
apiVersion: v1
kind: Service
metadata:
  name: backend
  namespace: sample
  annotations:
    prometheus.io/scrape: 'true'
    prometheus.io/path: '/metrics-micrometer'
    prometheus.io/port: '8080'
spec:
  selector:
    app: backend
  ports:
    - protocol: TCP
      port: 8000
      targetPort: 8080
  type: ClusterIP
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend-deployment
  namespace: sample
spec:
  replicas: 2
  selector:
    matchLabels:
      app: frontend
  template:
    metadata:
      labels:
        app: frontend
    spec:
      containers:
        - name: frontend
          image: sample-jib-image:alpha
          imagePullPolicy: IfNotPresent
          ports:
            - containerPort: 8080
          # Liveness probe, readiness probeを実装するまでコメントアウトしておいてください
          # livenessProbe:
          #   httpGet:
          #     path: /livez
          #     port: 8080
          #   initialDelaySeconds: 15
          #   periodSeconds: 20
          #   timeoutSeconds: 5
          #   successThreshold: 1
          #   failureThreshold: 3
          # readinessProbe:
          #   httpGet:
          #     path: /readyz
          #     port: 8080
          resources:
            requests:
              memory: "512Mi"
              cpu: "500m"
            limits:
              memory: "512Mi"
              cpu: "750m"
---
apiVersion: v1
kind: Service
metadata:
  name: frontend
  namespace: sample
  annotations:
    prometheus.io/scrape: 'true'
    prometheus.io/path: '/metrics-micrometer'
    prometheus.io/port: '8080'
spec:
  selector:
    app: frontend
  ports:
    - protocol: TCP
      port: 8000
      targetPort: 8080
  type: LoadBalancer
```

ここまでご覧いただきありがとうございました。以下のコマンドで今回作成したリソースを消しておきましょう。

```shell
skaffold delete
docker rmi $(docker images -q 'sample-jib-image')
# kubectl delete all --all -n sample # skaffoldを実行していない場合
helm uninstall grafana -n grafana
helm uninstall loki -n grafana
helm uninstall prometheus -n prometheus
```
## Introduction

Hello. I am Koike, a data engineer in the Analytics Group.

Slack is incredibly convenient, isn't it? You might be wondering, "What's this guy talking about all of a sudden?" But don't worry. It's undeniable that Slack is a very useful tool. Among its features, I believe that emoji reactions are the most useful by far. But first, do you know what an emoji reaction is? Let me explain it in a bit more detail. Take a look at this image:

![Slack emoji reaction](/assets/blog/authors/charlie/slack_emoji_reaction.png =660x)

This is something I posted in my personal times-channel. If you look at the area circled in white, you'll notice small bubble-like icons. These are what we call emoji reactions. Next to the "🥲" emoji, there's a "1," which means that one person reacted with that "🥲" emoji. The more reactions there are, the happier it makes me! There is also another emoji called donmai (meaning "don't worry!"). This isn't a default Slack emoji, but rather a custom emoji ^1 added by a user. This time, I'll be adding custom emojis with the help of ChatGPT.

## Background

Let me explain why I wanted to add custom emojis in the first place. Our Analytics Group is split across three locations: Tokyo, Nagoya, and Osaka. This means some of us are working fully remotely at all times. Since offline communication does happen occasionally in each location, information gaps may be even more pronounced compared to a fully remote setup. To help bridge this gap, I thought we should focus more on using the text communication tool that is Slack. And that's how I came up with the idea of adding custom emojis as a solution.

## Creating custom Slack emojis

Now let's dive into adding custom emojis. Here's how I plan to proceed:

1. Count the number of times each emoji has been used in reactions
2. Group the emojis collected in step 1 by the emotions they represent
3. Use ChatGPT to generate words based on the groups created in step 2
4. Select the words generated in step 3, create images using an emoji creation tool, and register them on Slack

### Counting the number of times emojis are used

When it comes to adding custom emojis, it's hard to know what exactly to add right away. So, I decided to start by researching what kinds of emojis are already being used regularly. Specifically, I'll narrow down the channels and time period, and then examine the emoji reactions and how often they were used in posts to identify trends.

First, I created a Slack app, set the permissions as shown below (the permissions might be a bit excessive since I added whatever seemed necessary), and issued a token. Next, I called the app from Python and performed the aggregation as follows:

```python
import datetime
import pprint
import time

import yaml
from slack_sdk.web import WebClient


def main():
    get_and_save_messages()
    reaction_counts = get_reaction_counts()
    pprint.pprint(sorted(reaction_counts.items(), key=lambda x: x[1], reverse=True))


def get_and_save_messages():
    SLACK_API_TOKEN = "SLACK_API_TOKEN"
    client = WebClient(token=SLACK_API_TOKEN)
    target_channel_id_to_name = {
        "id0": "#name0",
        "id1": "#name1",
        "id2": "#name2",
        "id3": "#name3",
        "id4": "#name4",
    }
    unix_times = get_unix_times()
    messages = get_messages_in_channels(client, list(target_channel_id_to_name.keys()), unix_times)
    with open("messages.yaml", "w") as f:
        yaml.dump(messages, f, allow_unicode=True)


def get_messages_in_channels(client, channel_ids, unix_times):
    merged_messages = []
    for channel_id in channel_ids:
        for unix_time_pair in unix_times:
            merged_messages += client.conversations_history(
                channel=channel_id,
                include_all_metadata=False,
                latest=str(unix_time_pair[0]),
                limit=100000,
                oldest=str(unix_time_pair[1])
            )["messages"]
    return merged_messages


def get_unix_times():
    unix_times = []
    today = datetime.date.today()
    start = 1
    end = 15
    for _ in range(24):
        start_date = today - datetime.timedelta(days=start)
        end_date = today - datetime.timedelta(days=end)
        start_unixtime = int(time.mktime(start_date.timetuple()))
        end_unixtime = int(time.mktime(end_date.timetuple()))
        unix_times.append((start_unixtime, end_unixtime))
        start = end + 1
        end = start + 14
    return unix_times


def get_reaction_counts():
    with open("messages.yaml", "r") as f:
        messages = yaml.safe_load(f)
    reaction_counts = dict()
    for message in messages:
        if "reactions" in message:
            for reaction in message["reactions"]:
                reaction_counts[reaction["name"]] = reaction_counts.get(reaction["name"], 0) + reaction["count"]
    return reaction_counts


if __name__ == "__main__":
    main()
```

Let me briefly explain the source code. In main(), functions for each process are called. get_and_save_messages() is a function that retrieves messages from Slack and saves them to a file. Note that SLACK_API_TOKEN and target_channel_id_to_name are hidden. If there are too many messages, they may not be retrieved all at once. To handle this, get_unix_times() divides the period into smaller parts and returns a list, allowing the messages to be retrieved in smaller batches. Once the messages are retrieved, the function get_reaction_counts() counts the number of emoji reactions. After that, the results are sorted by frequency and displayed.

Here is an example of the execution results. For example, ('man-gesturing-ok', 248) means that the man-gesturing-ok emoji was used 248 times as a reaction. The results include not only default emojis, but also custom ones and even emojis created by external users [^2].
```
('man-gesturing-ok', 248), ('eyes', 248), ('arigatougozai', 199), ('understood', 64),
('+1', 49), ('thinking_face', 43), ('yorosikuonegaisimasu', 26), ('arigatougozaimasu', 17),
('tada', 15), ('nice', 14), ('arigato', 13), ('do_ne', 13), ('man-gesturing-no', 11),
('scream', 10), ('kakuninshimasu', 10), ('acknowledged2', 9), ('woman-bowing', 9),
('sob', 8), ('ok', 8), ('faito', 8), ('kami_bl', 7), ('done_hiragana', 7), ('desune', 7),
('naruhodo', 7), ('ok_hand', 7), ('sugoi', 7), ('tasukaru', 7), ('pray_mochimochi', 7),
('done-payment', 6), ('hai-', 6), ('nanto', 6), ('yokata', 6), ('mumondai', 6),
('tashi-crab', 5), ('muscle', 5), ('oh', 5), ('sasuga2', 5), ('uoooo', 5),
```

## Emoji grouping

The resolution of the collected data is still low, so I grouped the emojis by type of emotional expression to better understand their characteristics. While researching, I came across emotion classifications developed by psychologists, but I found them too detailed and not well suited for this purpose, so I came up with my own way of grouping. I propose that the structure can be divided into two main categories: actions for reporting, communicating, and consulting, and actions for expressing emotions. These can then be further subdivided for clarity. Here is the breakdown:

- Reporting, Communicating, and Consulting
  - Reporting
  - Checking
  - Requesting
- Emotional Expressions
  - Empathy
  - Praise
  - Gratitude
  - Support
  - Comfort
- Others

Let's apply this classification to the results from the earlier aggregation.

![Emoji grouping](/assets/blog/authors/charlie/grouping.png =170x)

If you've looked closely, you may have noticed that I sorted most of the emojis that didn’t clearly fit into any category based on intuition... 🤫 Still, I believe the groupings I've come up with feel reasonably appropriate.

## Emoji generation using ChatGPT

After grouping the emojis, the next step is to generate custom emojis for each emotional category. For this, I’ll be using the popular ChatGPT.
I used the GPT-4 model without plugins or web browsing features. Here’s the prompt I used:

```
I want to create custom emojis for Slack. Please provide 10 suggestions for each of the following categories. Also, please try to keep them between 4 to 6 characters, using either hiragana, katakana, or kanji.

- Show empathy for the person who sent the message (e.g. :indeed:, :That's true:)
- Something to compliment the person who sent you the message (e.g. :Amazing:, :God:, :Great:)
- Something to show appreciation to the person who sent you a message (e.g. :Thanks:, :Appreciated:)
- Something to encourage the person who sent you the message (e.g. :Fight:, :Woohoo:)
- Something to comfort the person who sent you the message (e.g. :Oh:, :Wow:)
```

Here are the results:

> Custom emoji suggestions. I tried to keep each one within 4-6 characters using hiragana, katakana, or kanji.
>
> Something that shows empathy for the person who sent you the message: :I know: (wakaru yo), :I agree: (doukan), :Makes sense: (nattoku), :Yep: (un un), :That's it: (sore sore), :I want to say: (iitai), :Empathy: (kyoukan), :I feel you: (kanjiru), :You're right: (gomottomo), :Thoughtfulness: (omoiyari)
>
> Something to compliment the person who sent you the message: :Amazing: (sugoi ne), :Genius: (tensai), :Incredible: (sugoi), :Cool: (kakkoii), :Lovely: (suteki), :Beautiful: (utsukushii), :Perfect: (kanpeki), :I respect you: (sonkei), :Praise: (shousan), :You’re shining: (kagayaiteru)
>
> Something to show appreciation to the person who sent you the message: :Thank you: (arigatou), :Gratitude: (kansha), :I’m grateful: (kansha shiteru), :Thanks: (arigatou ne), :Thanks a lot: (arigatou-), :You saved me: (tasukatta), :Grateful feelings: (kansha no kimochi), :Repay your kindness: (ongaeshi), :Appreciation: (orei), :I’m moved: (kangeki)
>
> Something to encourage the person who sent you the message: :Do your best: (ganbatte), :Fight: (faito), :Hang in there: (ganbare), :I’m cheering for you: (ouen suru), :I’ll be your strength: (chikara ni naru), :Let's do our best together: (issho ni ganbarou), :Let’s keep going: (ganbarimashou), :Sending you cheers: (eeru wo okuru), :Wishing you success: (seikou wo inoru), :Be confident: (jishin wo motte)
>
> Something to comfort the person who sent you the message: :It’s okay: (daijoubu), :Hang in there: (ganbare), :Don’t worry about it: (ki ni shinaide), :You did your best: (ganbatta ne), :It's tough: (tsurai ne), :That's tough: (tsurai yo ne), :Tomorrow is a new day: (ashita wa atarashii hi), :Don't feel down: (ochikomanaide), :Stay positive: (maemuki ni), :You’re doing great: (ganbatteru ne)

Some of these may feel unnatural for use on Slack, but overall, they seem pretty good.

## Emoji selection and Slack registration

Next, I selected the most useful ones from the emojis generated by ChatGPT, while removing any that were already registered in the Slack workspace.

- Empathy
  - :I understand:
  - :I agree:
  - :You have a point:
- Praise
  - :That’s amazing:
- Gratitude
  - :I am touched:
- Support
- Comfort
  - :Don’t worry:

This is what it turned out like. Finally, I used an emoji creation tool [^3] to turn these into images and registered them on Slack. Ah, what a great view!

## Conclusion

What do you think? This alone may not instantly enhance communication on Slack; taking the initiative to actively use the new emojis and encouraging their adoption across the organization will be key. This time we added custom emojis spanning various emotional-expression categories, but focusing on creating emojis that fill specific gaps could also be effective. I encourage you to try this out in your organization’s Slack workspace!

[^2]: ¥0 – SmartHR Store
[^3]: Emoji Generator
## Introduction

Konnichiwa! I am Felix, and I develop iOS applications at KINTO Technologies. This time, I would like to share my experience at iOSDC, held from August 22nd to 24th (Thursday to Saturday). Following my previous post about try! Swift, this was my second time attending an iOS conference. This time, we participated as a sponsor and hosted our own booth!

## KINTO Booth

Our booth was decked out in KINTO blue, and as you can see, even the happi coats (traditional Japanese straight-sleeved coats) matched the theme! We hosted a coding challenge where participants had to read through some real project code and stamp the corresponding question numbers onto the code. For our booth giveaways, we distributed KINTO mascot stickers and cardboard cutouts shaped like iOS devices, which participants could decorate with stickers. Those who took part in the coding challenge received either an eco-bag or a multi-chain as a reward. It was a great opportunity to engage with attendees, hear their thoughts on KINTO Technologies, and gather valuable feedback on our projects. The conversations provided fresh insights into how our products are perceived and offered helpful guidance for future improvements.

## Other Booths

There were many interesting and educational booths from other companies that caught my attention, and I'd like to highlight a few that stood out:

- Sansan: Sansan’s booth was intriguing because it showcased their technology stack, allowing guests to react to various tools and frameworks.
- DeNA: DeNA’s booth had a particularly fun activity where participants solved a crossword puzzle by both reading code and looking at a map.
- Bitkey: Although you needed a test device and a MacBook to implement their beacon app, it was fun trying to develop the test app and find the person carrying the beacon.
- Glassfiber: Glassfiber had a quiz that attracted a lot of people, offering both fun and educational content.
## Presentations

I attended several sessions and would like to highlight the two that impressed me the most.

### Modern In-App Purchasing with StoreKit 2

First, I would like to talk about StoreKit 2 and its modern usage. As I haven't worked with StoreKit yet, I found the session insightful. It focused on the introduction, implementation, and testing of StoreKit 2, with a detailed comparison to StoreKit 1. The presentation covered key areas like simplifying asynchronous processing with async/await, streamlining receipt validation, and testing using sandbox environments, TestFlight, and StoreKitTest. This was highly informative for anyone looking to integrate in-app purchases into their apps. One interesting point I learned was that StoreKit doesn't directly support scenarios where a customer makes a payment but doesn't receive their purchased item, which was surprising to me.

StoreKit 2によるモダンなアプリ内課金 (Modern In-App Purchasing with StoreKit 2): This session explains StoreKit 2 implementation and testing, focusing on improvements that simplify in-app purchases.

### How does GPS find your location?

Another session I found interesting was about how mobile devices receive GPS signals and compute their location. It explained how Core Location uses a combination of GPS, Wi-Fi, and cellular signals for accurate positioning. While the basic GPS principle is triangulation with satellites, the talk also highlighted the complex engineering behind receiving weak signals over vast distances and how smartphones leverage network data for quick, precise results. I did not know much about this topic, so I found it enlightening.

GPSでどのようにして現在地が分かるのか (How does GPS determine your current location?): The video explores how GPS and network data enable smartphones to quickly and accurately determine location.

## Conclusion

Overall, iOSDC 2024 was a fantastic experience. It was not only an opportunity to learn from insightful sessions but also a chance to engage with the broader iOS developer community.
Hosting the KINTO booth allowed me to interact with many talented individuals, hear their feedback, and showcase our work in a meaningful way. The presentations I attended, particularly those on StoreKit 2 and GPS technology, provided me with actionable insights that I can directly apply to KINTO’s ongoing projects. For example, the async/await improvements in StoreKit 2 will greatly streamline our in-app purchase implementation, making the process more efficient and user-friendly. Similarly, the advanced use of GPS and network triangulation will help us enhance location-based services in our apps, leading to more accurate and faster results for users. I’m excited to integrate these learnings into our development process and continue growing as a developer. Thank you for reading!
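As a small appendix to the GPS session mentioned above: the core triangulation idea can be illustrated with a toy 2D solver. Given known anchor positions and measured distances, subtracting one circle equation from the others leaves a small linear system. This is only a conceptual sketch in Python, not anything from the talk itself; real GPS solves in 3D and also estimates the receiver's clock offset as a fourth unknown.

```python
def trilaterate_2d(anchors, distances):
    """Solve for (x, y) from three anchor points and measured distances.

    Subtracting the first circle equation from the other two cancels the
    quadratic terms, leaving a 2x2 linear system A·[x, y] = b.
    The anchors must not be collinear, or the system is singular.
    """
    (x1, y1), (x2, y2), (x3, y3) = anchors
    d1, d2, d3 = distances
    # Coefficients of the two linear equations
    a11, a12 = 2 * (x2 - x1), 2 * (y2 - y1)
    a21, a22 = 2 * (x3 - x1), 2 * (y3 - y1)
    b1 = d1**2 - d2**2 + x2**2 - x1**2 + y2**2 - y1**2
    b2 = d1**2 - d3**2 + x3**2 - x1**2 + y3**2 - y1**2
    # Solve the 2x2 system by Cramer's rule
    det = a11 * a22 - a12 * a21
    x = (b1 * a22 - b2 * a12) / det
    y = (a11 * b2 - a21 * b1) / det
    return x, y
```

For example, anchors at (0, 0), (10, 0), and (0, 10) with distances measured from the point (3, 4) recover that point.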
## Introduction

Hello. My name is Chris, and I work on front-end development in the Global Development Department at KINTO Technologies. Today, I'd like to write about automating business tasks instead of front-end development. According to Slack's Productivity Report released last month, 77% of workers said that being able to automate routine tasks would greatly improve their productivity, saving them about 3.6 hours per week. So it's important to automate your daily work as much as possible, so you can focus on what you really need to do and get more done.

Sorry for the sudden change of subject, but I'd like to talk about our company's attendance rules, which were revised in July this year. Under these rules, we have a fixed maximum number of remote workdays every month. When to take those days is generally up to each individual, as long as it's coordinated within the team and shared with other members of the division. So that everyone knows the members' attendance schedules, we have to report the schedule for the following week in advance. In the Global Development Department, each member writes down their schedule for the following week in a monthly Excel spreadsheet stored on a cloud service called Box, and each leader then compiles their team's data and shares it with their manager on Slack.

## What's the Problem?

Due to various departmental circumstances, the most efficient way to manage the information is to handle it all at once in Excel, but the problem is the flow of sharing it with managers. The Global Development Department has many members, and as a result, there are a fair number of leaders. It takes some time for all the leaders to take screenshots of the Excel sheets and share them every week, and switching between tasks is a mental burden. In addition, some team members don't have assigned leaders, so their schedules aren't shared. The only way to check their availability is by looking directly at the Excel sheet.
I thought it would be great to eliminate these two difficulties by leveraging my engineering skills, with minimal time and effort. So I used the SDKs provided by Box and Slack to automate the process of extracting the information from Excel and uploading the schedule details to Slack!

## Development Environment

This automation was achieved using Node.js with the following libraries. The actual code was written in TypeScript, but this article shows the JavaScript code.

- dotenv: When using the Box SDK or Slack SDK, I need to supply sensitive information such as tokens, so I load them as environment variables using dotenv. https://github.com/motdotla/dotenv
- box-node-sdk: The SDK for Node.js provided by Box. https://github.com/box/box-node-sdk
- node-slack-sdk: The SDK for Node.js provided by Slack. https://github.com/slackapi/node-slack-sdk
- node-xlsx: A library that converts information from Excel files to JSON. https://github.com/mgcrea/node-xlsx
- canvas-table: A library that turns tables into images. https://github.com/el/canvas-table
- node-canvas: The library that canvas-table is built on. https://github.com/Automattic/node-canvas/

## Implementation

Now I would like to explain the implementation step by step.

### Step 1: Retrieve files from Box

First, an application needs to be created from the Box admin console to enable the use of the Box SDKs. You can create a new one from the Box Developer Console (xxx.app.box.com/developers/console). After creation, a client ID is issued for the app, but an access token must be issued separately. If your workspace is managed by your company, you will generally need approval from the company administrator on the administrator's screen. Once you have obtained the token, a service account ID should have been issued on the app details screen.
If you do not share the folder or file you want to access with this service account, you will get a 404 error when you try to retrieve it from the SDK. Next, let's move on to the code. First, install the Box SDK.

```shell
yarn add box-node-sdk
```

After that, you can write code like the following to download the file to a specified location. A description of the download process can also be found in the official documentation.

```javascript
import fs from "fs";
import BoxSDK from "box-node-sdk";

// Put the issued token here.
const boxClient = BoxSDK.getBasicClient("token information");

// Use async/await for the process that retrieves the file.
await downloadFile();

async function downloadFile() {
  return new Promise((resolve, reject) => {
    boxClient.files.getReadStream(
      // File ID
      "1234567",
      // Query parameters; use these if, for example, you want to get an older version of the file
      // https://developer.box.com/reference/get-files-id-content/
      null,
      // Callback function
      function (error, stream) {
        if (error) {
          reject(error);
          return;
        }
        const output = fs.createWriteStream("output path of the file");
        // Resolve the Promise when writing has finished
        output.on("finish", resolve);
        stream.pipe(output);
      }
    );
  });
}
```

Run the above code, and if the file exists and access permissions are correctly granted, the file should be written to the specified path.

### Step 2: Retrieve the necessary information from the file

Next, I want to retrieve the necessary information from the file downloaded from Box. Since it is an Excel file, I'll use node-xlsx to parse it.

```shell
yarn add node-xlsx
```

```javascript
import xlsx from "node-xlsx";

const workSheets = xlsx.parse("path of the downloaded file");
console.log(workSheets);
// [
//   {
//     name: "sheet name",
//     data: [
//       [],
//       [],
//       []
//     ]
//   }
// ]
```

This extracts the information for each Excel sheet as a nested array, so you can process the data or delete anything unnecessary.

### Step 3: Convert the information into an image

Frankly, many of you may be wondering, "Why do we need this?"
In fact, even when I first attempted the automation, I had no idea either. However, after obtaining the necessary information, I tried several ways to post table information to Slack in an easy-to-read format. For example, I tried creating a table using Markdown, but Slack does not support it, so when I actually tried it, the layout was quite messed up. In the end, turning the table information into an image laid out the members' schedule information neatly. For this, I used canvas-table to create the table image.

```javascript
import { CanvasTable } from "canvas-table";
import { createCanvas } from "canvas";

// First, create a blank image (canvas)
const canvas = createCanvas(imageWidth, imageHeight);

// Define the table contents
const tableConfig = {
  // Column information
  columns: [
    { title: "title" }
  ],
  // Information for each cell
  data: [
    [
      { value: "text" }
    ]
  ],
  // Optional settings
  options: {
    borders: {
      column: { width: 1, color: "#555" },
      row: { width: 1, color: "#555" },
      table: { width: 1, color: "#555" }
    },
    title: {
      text: "title"
    }
  }
};

const ct = new CanvasTable(canvas, tableConfig);
await ct.generateTable();
await ct.renderToFile(fileName);
```

This generates the table image shown below.

### Step 4: Post to Slack

The next step is to post the image to Slack, using the files.upload method from Slack's @slack/web-api.

```shell
yarn add @slack/web-api
```

```javascript
import fs from "fs";
import { WebClient } from "@slack/web-api";

// Set the Slack OAuth token
const slackClient = new WebClient("token information");

const result = await slackClient.files.upload({
  channels: "channel ID",
  initial_comment: "accompanying comment text",
  file: fs.createReadStream("file path")
});
```

The upload is now complete!

### Step 5: Run automatically with GitHub Actions

With the steps above, the script is complete, but it still needs to be run locally. Now, wouldn't it be perfect if this script could run automatically?
We use GitHub Actions a lot in our company, regardless of department, so we'll use it again this time. First, create a yml file.

```yaml
name: Name of the workflow

# Runs every Wednesday at 1:00 p.m. JST (written as 4:00 a.m. UTC)
on:
  schedule:
    - cron: '0 4 * * 3'

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      # Check out the repository
      - name: Checkout
        uses: actions/checkout@v3

      # Set up the Node environment
      - name: Setup Node.js environment
        uses: actions/setup-node@v3
        with:
          # Specify the appropriate Node version
          node-version: '18'

      # Install the libraries with Yarn
      - name: yarn install
        run: yarn install

      # Run the script (as follows if the js file you want to run is index.js)
      - name: Run index file
        run: node index
```

Now it will be executed automatically at the time specified by cron (although it may be slightly delayed).

### Extra step: Change the font

While this step is not necessary, I tried it as an extra. As a Toyota group company, we use Toyota's own font, and I wanted to apply it to the schedule table. I used the canvas library (node-canvas) to create the image, and it actually lets you register fonts as well. Since the Toyota font is proprietary, a font file must be provided so that the project can reference it.

```javascript
// Import registerFont
import { registerFont, createCanvas } from "canvas";

// Always call registerFont before createCanvas
registerFont('Font file path', { family: 'Font name' });
const canvas = createCanvas(canvasWidth, canvasHeight);

// Specify the font to be used for the image
const config = {
  columns: [
    // ...
  ],
  data: [
    [
      // Define the cell information
      {
        value: "text",
        fontFamily: "font name"
      }
    ]
  ],
  options: {
    // Define the title
    title: {
      fontFamily: 'font name',
      text: "title"
    }
  }
};

const ct = new CanvasTable(canvas, config);
// ...
```

If all goes well, you will get an image with the font applied, like the one below.

## Conclusion

There are still many areas to improve in what I built this time, so when I have time I would like to refactor it and add some nice features.
If your company is also considering automating some business tasks, I hope this will be helpful!
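To recap, the job above chains the five steps into one script. Here is a minimal, language-agnostic sketch of that orchestration, written in Python for brevity; every injected callable is a hypothetical stand-in for the Box, node-xlsx, canvas-table, and Slack SDK calls shown in the steps, not a runnable equivalent of those SDKs.

```python
def weekly_report_pipeline(download, parse, render, post, file_id, channel):
    """Chain the steps: Box download -> Excel parse -> table image -> Slack post.

    The step functions are injected, which keeps the pipeline itself easy to
    test and lets each SDK-specific piece be swapped out independently.
    """
    local_path = download(file_id)    # Step 1: fetch the Excel file from Box
    rows = parse(local_path)          # Step 2: extract the sheet rows
    image_path = render(rows)         # Step 3: draw the schedule table as an image
    return post(channel, image_path)  # Step 4: upload the image to Slack
```

In the real script, a scheduler such as the GitHub Actions cron trigger from Step 5 simply invokes this entry point once a week.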
Hi, my name is Ryomm, and I’m developing the iOS version of an app called my route at KINTO Technologies (KTC). This year, KTC is proud to sponsor iOSDC Japan 2024 for the first time! The event runs for three days, from August 22 to 24, 2024.

▼ I also recommend checking out the following blog on this topic ▼
✨ KINTO Technologies is a Gold Sponsor of iOSDC Japan 2024 ✨

We’re even setting up a booth✨ A lot of people, including members of the Tech PR Group, the Creative Office, and the Mobile App Development Group, have come together to prepare, and I believe it’s shaping up to be a fantastic experience for everyone. Please come visit the KTC booth! We would be happy if you remembered the name KTC (KINTO Technologies)! We’ve put a lot of thought into this sponsorship and are excited to showcase on this blog the many swag items we’ve created!

## Kumobii Paper Clip

This item is included in the novelty box! It was Chimrin's idea, combining practicality and style! Kumobii is the official mascot of KINTO. https://corp.kinto-jp.com/mascot/profile/ You can use the clip to mark your favorite page in a pamphlet or as a bookmark for technical books. Despite being made of paper, the clip is quite durable and easy to use! When you open the paper base... a token appears!

## Brochure manuscript

There is also an advertisement for KTC in the brochure included in the novelty box! We designed it to reflect the essence of KTC, a company that provides technological support for Toyota's mobility services.

## Sticker & Sticker Mount Set

This novelty item will be given to everyone who stops by our booth! I’m happy to say that my (Ryomm's) idea was selected for this 🙌 At events like this, you often receive a lot of stickers at each booth, but what do you usually do with them? At try! Swift Tokyo 2024, I saw someone decorating their name tag with stickers like a collage, and I thought that was such a great idea! So I decided to copy it.
For iOSDC, since the clear name tag holders don't have room for folded paper, we prepared a backing paper designed specifically so you can create your own sticker collage! We also designed it to resemble an iPhone and made it about the same size as the 15 Pro, so it fits perfectly into your name tag case. It would be great if you could keep it in your name tag case as a memento of the event. We are also distributing icon-style stickers based on the apps KTC provides, so feel free to stick them on the backing paper as well.

## Multi-card tool

This is the first commemorative novelty item from the booth event! The iOS team held an ideathon, and K. Kane's idea was selected.

When stored

I'm sure many iOS engineers have, at some point, used a ruler to check whether their view matches the design when implementing it... or maybe not. In cases like those, this business-card-sized tool has you covered! You can measure both length and angles anytime, anywhere.

## Tote Bag

This tote bag features a cute print of Kumobii. It’s the second commemorative novelty item from the booth project. You can choose between the multi-card tool and the tote bag, so feel free to stop by the booth as many times as you like. Since you’ll collect lots of items at iOSDC, wouldn't it be handy to have a bag to carry everything? This idea came from uka! The bag is made of a durable material, and I highly recommend it!

## Leaflets distributed at the booth

We’re also handing out leaflets introducing KTC at the booth. We want people to learn about the products KTC develops, and that’s the message we’ve put into our leaflets.

## Booth Activities

At the booth, we’ve prepared a game called "I Found the Code!" where you search for the part of the code that performs a given task. Each KTC product team has prepared its own set of questions, and the questions will change throughout the day, so be sure not to miss any!
While we’re particular about the content of the questions, we’ve also paid close attention to small details to create a cohesive booth atmosphere. We borrowed wooden frames for displaying our posters and customized them in black with DIY stickers, designed the background to make the double-column code easier to read, and even tailored the question text to match the overall booth theme! We also took the opportunity to create a roll-up banner. So, why not try your hand at our booth activities while surrounded by KINTO Blue?

## Conclusion

A huge thank-you goes to Sugimoto Aya san and Awano san from the Creative Team, who took on this massive set of deliverables and delivered the coolest designs! When we were creating the novelties, they brought handmade prototypes and communicated closely with us to help visualize the final products. Thanks to their efforts, we are ready to confidently welcome everyone to our booth. And now, it's finally happening: starting August 22nd, we’ll be waiting for you at our sponsor booth at ROHM Square! Please stop by and visit us!
Building Cloud-Native Microservices with Kotlin/Ktor (Observability Edition)

Hello. My name is Narazaki from the Woven Payment Solution development group. At Woven by Toyota, we are involved in the backend development of the payment infrastructure used in Toyota's Woven City, and we are building it with Ktor, a web framework written in Kotlin. These backend applications run on City Platform, a Kubernetes-based infrastructure used in Woven City, and form the foundation of our microservices.

In this article, I would like to introduce some pain points that come with a microservices architecture, along with tips for improving observability, which is essential to resolving them, using Ktor, the web framework we use, and Kubernetes, our platform for hosting microservices, as examples. Alongside Kubernetes, I will use a so-called "cloud-native" technology stack: Loki as the log collection tool, Prometheus as the metrics collection tool, and Grafana as the visualization tool. I hope this will be useful not only for those who are developing microservices with Java or Kotlin, but also for developers who are planning to introduce microservices and Kubernetes, regardless of the programming language they use. Instructions on how to replicate these steps, along with sample code, are provided at the end of this post. If you have time, please give it a try!

## First: The Challenges of Microservices

Generally speaking, adopting microservices resolves various problems of monolithic applications, but on the other hand the complexity of the application increases, making it very difficult to isolate problems when they occur. Here, we will consider three specific pain points:

- Pain point 1: It is not clear when, and in which service, an error occurred.
- Pain point 2: The operational status of dependent services must always be taken into consideration.
- Pain point 3: It is difficult to isolate resource-related performance degradation.

By improving observability, we can tackle these challenges. In this post, I’ll show how to implement a solution for each pain point using Ktor as an example. The approach involves introducing just three plugins and adding a few lines of code.

The three Ktor plugins that we are introducing

## Solution 1: Introducing CallId

In this solution, I will create two services that frequently call each other's APIs within the same cluster, as is common in microservices, and see how the logs are captured in this environment.

```mermaid
sequenceDiagram
    participant User as External user (outside the cluster)
    participant A as Frontend Service
    participant B as Backend Service
    User->>A: /call request
    Note over User,A: Request from outside the cluster
    A->>B: / request
    Note over A,B: Pass the request from Frontend to Backend
    B-->>A: / response
    Note over B,A: Return the result processed by Backend
    A->>User: /call response
```

Logs are written to standard output and collected by a log collection tool (Loki, in this case) deployed separately on Kubernetes. The services will be referred to as the caller (frontend) and the callee (backend). When monitoring, you may be able to see what is happening on each server by specifying the pod name and the like on a logging platform, but requests that span servers cannot be viewed in relation to each other. Especially as the number of requests increases, it becomes very difficult to work out which application logs are related simply by displaying logs in chronological order. When a large number of requests come in, it becomes unclear which requests and responses belong together...

The mechanism that associates causally related events across servers over the network is called distributed tracing. In general, if you use a service mesh like Istio, you can visualize related requests with tools like Zipkin and Jaeger, making it intuitive to understand where errors occurred.
On the other hand, it is not very convenient when troubleshooting application logs, for example when searching the logs for keywords. This is where Ktor's CallId comes into play. With this feature, you can search for and view specific logs by using the CallId as a keyword on the logging platform. And since there is nothing to configure at the network layer, it is a flexible approach that an application engineer can complete without introducing a service mesh or similar infrastructure.

Let's actually run the application and check the logs in Grafana. In this example, we will prepare the same container image for both the frontend and backend, so we only need to generate one project. Follow these steps to generate the source code from the template.

```kotlin
dependencies {
    implementation("io.ktor:ktor-server-call-logging-jvm:$ktor_version")
    implementation("io.ktor:ktor-server-call-id-jvm:$ktor_version")
    implementation("io.ktor:ktor-server-core-jvm:$ktor_version")
}
```

The necessary libraries are referenced as shown above. The logging-related part of the generated code should be modified as follows. (Comments are included to explain each line; no further modifications are necessary.)

```kotlin
fun Application.configureMonitoring() {
    install(CallLogging) {
        level = Level.INFO
        filter { call -> call.request.path().startsWith("/") } // Specify the conditions under which logs will be output
        callIdMdc("call-id") // Makes the value available to the %X{call-id} placeholder in logback.xml
    }
    install(CallId) {
        header(HttpHeaders.XRequestId) // Specify which header carries the ID value
        verify { callId: String ->
            callId.isNotEmpty() // Verify that a value exists
        }
+       generate {
+           UUID.randomUUID().toString() // If not, generate and embed a new value
+       }
    }
}
```

In the HTTP client implementation, it's recommended to set the header with this value so that the same CallId propagates across requests. Add the following dependencies to verify that the CallId propagates correctly between servers.

```kotlin
dependencies {
    ...
+   implementation("io.ktor:ktor-client-core:$ktor_version")
+   implementation("io.ktor:ktor-client-cio:$ktor_version")
    ...
}
```

```kotlin
routing {
+   get("/call") {
+       application.log.info("Application is called")
+       val client = HttpClient(CIO) {
+           defaultRequest {
+               header(HttpHeaders.XRequestId, MDC.get("call-id"))
+           }
+       }
+       val response: HttpResponse = client.get("http://backend:8000/")
+       call.respond(HttpStatusCode.OK, response.bodyAsText())
+   }
}
```

Once you're able to build and deploy using the sample code below, try running the following commands to make API calls:

```shell
curl -v localhost:8000/
curl -v -H "X-Request-Id: $(uuidgen)" localhost:8000/call
```

With this setup, the CallId now propagates between servers, allowing it to be used as a searchable keyword. Even if you don't set a value in the header, a CallId is still generated and added to the log. And if you search for the UUID value generated by this command, you can correlate events across multiple servers.

Solution 2. Setting Up Liveness and Readiness Probes

In Kubernetes, liveness and readiness probes are mechanisms that communicate the application's health status to the control plane. You can refer to this Google article for more information on each.

Liveness probe: Reports the container's own health status.
Readiness probe: Reports whether the application, including its dependent services, is ready to handle requests through its APIs.

By setting these, you can efficiently recycle containers that have become unhealthy, and keep traffic away from containers that are not yet ready to serve it. Let's implement these with Ktor. No libraries are needed here. The implementation policy is: the liveness probe only tells Kubernetes that the process itself is alive, so simply returning OK to the request is fine. The readiness probe pings dependent services and connected databases. To handle cases where responses aren't received in time, set a request timeout.

```kotlin
routing {
    ...
    get("/livez") {
        call.respond("OK") // Simply returns a 200 status to indicate the web server is running
    }
    get("/readyz") {
        // Implement pings to the DB or other dependent services based on the application's requirements
        // You can set request timeouts on the SQL client or HTTP client to ensure connections are made within the expected time
        call.respond("OK")
    }
}
```

You need to tell the Kubernetes control plane that these API endpoints exist. Add the following to the Deployment definition. This configuration also lets you allow for the time the application needs before it can process requests, which prevents false detections even if initial startup takes longer.

```yaml
...
livenessProbe:
  httpGet:
    path: /livez
    port: 8080
readinessProbe:
  httpGet:
    path: /readyz
    port: 8080
  initialDelaySeconds: 15 # The readiness probe starts checking 15 seconds after the container starts; the default is 0
  periodSeconds: 20       # Runs every 20 seconds
  timeoutSeconds: 5       # Expected to return a result within 5 seconds
  successThreshold: 1     # Considered successful after one success
  failureThreshold: 3     # After three consecutive failures the probe is considered failed (the pod is restarted for liveness, or removed from Service endpoints for readiness)
...
```

With this, the setup is complete. You can test the behavior by adding a sleep within the endpoint or by adjusting these parameters. Also, although it is out of scope this time, we recommend building a notification system using Prometheus's Alertmanager or similar so that you are alerted when an abnormality is detected.

Solution 3. Configuring Micrometer

By implementing the first two solutions, observability should already be significantly improved. While Kubernetes allows monitoring at the Pod and Node levels, runtime-level monitoring within the application is still limited. Generally, Kotlin applications run on the JVM, so you can monitor runtime performance by tracking CPU and memory usage, as well as garbage collection behavior, on the JVM. This helps detect unintended runtime-related performance degradation.
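To make this concrete, here is what those runtime figures look like when read directly from the JVM's standard `java.lang.management` beans. This is a plain-JVM illustration only (the `jvmRuntimeSnapshot` helper is a name made up for this sketch, not part of Ktor or the template); a metrics library continuously collects the same underlying data for you:

```kotlin
import java.lang.management.ManagementFactory

// Snapshot of the JVM runtime figures discussed above, read via the standard JMX beans.
fun jvmRuntimeSnapshot(): Map<String, Long> {
    val heap = ManagementFactory.getMemoryMXBean().heapMemoryUsage
    val gcBeans = ManagementFactory.getGarbageCollectorMXBeans()
    return mapOf(
        "heapUsedBytes" to heap.used,                       // heap currently in use
        "heapCommittedBytes" to heap.committed,             // heap guaranteed by the OS
        "gcCount" to gcBeans.sumOf { it.collectionCount },  // cumulative collections across all collectors
        "gcTimeMs" to gcBeans.sumOf { it.collectionTime }   // cumulative GC pause time
    )
}

fun main() {
    jvmRuntimeSnapshot().forEach { (key, value) -> println("$key = $value") }
}
```

Micrometer's JVM binders report these same beans as time series, which is what makes the JVM dashboards in Grafana possible.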
So, how should we approach this in a microservices architecture? In a monolith, it is relatively simple: install an agent on the server where the application runs. In Kubernetes, on the other hand, where containers are repeatedly created and destroyed, installing an agent is not very practical. Ktor provides a plugin for Micrometer, the de facto standard in the Java ecosystem for collecting metrics, which can be integrated with Prometheus for monitoring. When creating a project from the template described above, the following packages and source code are added to the project.

```kotlin
implementation("io.ktor:ktor-server-metrics-micrometer-jvm:$ktor_version")
implementation("io.micrometer:micrometer-registry-prometheus:$prometeus_version")
```

```kotlin
val appMicrometerRegistry = PrometheusMeterRegistry(PrometheusConfig.DEFAULT)

install(MicrometerMetrics) {
    registry = appMicrometerRegistry
}

routing {
    get("/metrics-micrometer") {
        call.respond(appMicrometerRegistry.scrape())
    }
}
```

By specifying the following annotations in the Kubernetes configuration files, Prometheus will automatically scrape the endpoint and collect the data.

```yaml
kind: Service
metadata:
  name: backend
  namespace: sample
+ annotations:
+   prometheus.io/scrape: 'true'
+   prometheus.io/path: '/metrics-micrometer'
+   prometheus.io/port: '8080'
```

Additionally, by adding a Grafana dashboard from the marketplace, you can easily visualize JVM performance metrics, improving the transparency of your application. You can simply copy and paste the dashboard ID from the marketplace to register it. This setup allows you to display memory, CPU, garbage collection, and other metrics on a per-pod basis.

In addition, by using these metrics to see how much CPU and memory the application consumes at any given time and setting the container's CPU and memory resources accordingly, you can improve the efficiency of resource usage across the Kubernetes cluster. (Setting these resources is also necessary to ensure proper scaling of the application.)
```yaml
resources:
  requests:
    memory: "512Mi"
    cpu: "500m"
  limits:
    memory: "512Mi"
    cpu: "750m"
```

Lastly

I hope this has shown that Ktor is a plugin-based web framework that can improve non-functional requirements without significantly changing the behavior of existing applications. In complex systems, a single oversight can lead to untraceable issues, where hypotheses about bugs can't be verified and debugging turns into a maze. Regardless of the architecture, it's important to continuously reduce blind spots to prepare for potential issues.

I hope this article has provided an introduction to the observability features of web frameworks for microservice applications. If you are considering adopting microservices in the future and are unsure which framework to choose, it is worth checking whether features like these are available when selecting a technology. There are also other best practices for building and smoothly operating microservices, such as implementing GitOps, managing inter-service authentication and authorization, and load balancing, which I hope to cover in a future post.

Finally, we are hiring for a variety of positions. If you're interested, feel free to start with a casual chat.

(Reference) Environment Setup and Sample Code

To replicate this setup in your own environment, you'll need a Java runtime, Docker Desktop with Kubernetes enabled, and Helm. These have been tested on Mac/Linux. (Windows users, please use WSL2.) This article assumes Kubernetes is running locally; if it's in the cloud, adjust accordingly. In this article, we used Loki for log collection, Prometheus for metrics collection, and Grafana for visualization. The source code is created from scratch using a template, and the Docker image is built with Jib as a Gradle build task. In the following example, we run the build task in Gradle using Kotlin Script (.kts).
We also recommend installing a tool called Skaffold to automate Docker tagging and Kubernetes deployment for your container cluster.

```shell
helm repo add grafana https://grafana.github.io/helm-charts
helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
helm repo update
helm install prometheus prometheus-community/prometheus -n prometheus --create-namespace
helm install loki grafana/loki-stack -n grafana --create-namespace
helm install grafana grafana/grafana -n grafana
export POD_NAME=$(kubectl get pods --namespace grafana -l "app.kubernetes.io/name=grafana,app.kubernetes.io/instance=grafana" -o jsonpath="{.items[0].metadata.name}")
kubectl --namespace grafana port-forward $POD_NAME 3000 # Open another terminal to keep this command running after execution.
kubectl get secret --namespace grafana grafana -o jsonpath="{.data.admin-password}" | base64 --decode # | pbcopy # For Mac users, uncomment this part to copy the password to the clipboard.
```

Now, access Grafana in your browser at http://localhost:3000. Log in with the user ID admin and the password output by the last command. Configure each data source as follows:

Loki: http://loki:3100
Prometheus: http://prometheus-server.prometheus.svc.cluster.local

This completes the monitoring setup. For the code, create a new Ktor application from a template in IntelliJ IDEA. Select the following from IntelliJ. If you're using VS Code, you can download it from this site. In this example, we will prepare the same container image for both the frontend and backend, so we only need to generate one project. Add the following Jib configuration for building with Docker. Then, confirm that you can build by running the Jib Gradle task `./gradlew jibDockerBuild`.

```kotlin
plugins {
    application
    kotlin("jvm") version "1.8.21"
    id("io.ktor.plugin") version "2.3.1"
+   id("com.google.cloud.tools.jib") version "3.3.1"
}
...
+ jib {
+     from {
+         platforms {
+             platform {
+                 architecture = "amd64"
+                 os = "linux"
+             }
+         }
+     }
+     to {
+         image = "sample-jib-image"
+         tags = setOf("alpha")
+     }
+     container {
+         jvmFlags = listOf("-Xms512m", "-Xmx512m")
+         mainClass = "com.example.ApplicationKt"
+         ports = listOf("80", "8080")
+     }
+ }
```

Let's change the Logback log level so that we can keep an eye on the logs we added this time. Also, to avoid noise, we'll hide the monitoring endpoints.

```xml
- <root level="trace">
+ <root level="info">
```

```kotlin
install(CallLogging) {
    level = Level.INFO
-   filter { call -> call.request.path().startsWith("/") }
+   filter { call -> !arrayOf("/livez", "/readyz", "/metrics-micrometer")
+       .any { it.equals(call.request.path(), ignoreCase = true) } }
    callIdMdc("call-id")
}
```

Once you have added this to the source, deploy the container image to Kubernetes with the following commands and the application will run. Check Grafana to see whether logs and metrics are streaming in correctly. Since the services.yaml file is a bit lengthy, it's provided at the very end.

```shell
./gradlew jibDockerBuild && kubectl apply -f services.yaml # Update the Docker tag with each build

# If you have Skaffold installed, you can use the following commands:
skaffold init # Generates yaml files
skaffold run  # Builds and deploys the application once
skaffold dev  # Continuously builds and deploys each time you update the source code
```

Including portForward in the Skaffold file makes it convenient to access the application at localhost:8000 automatically.

```yaml
apiVersion: skaffold/v4beta5
kind: Config
metadata:
  name: observability
build:
  artifacts:
    - image: sample-jib-image
-     buildpacks: # Remove this as it slows down the build
-       builder: gcr.io/buildpacks/builder:v1
+     jib: {} # Make sure JAVA_HOME is set to the correct PATH to avoid execution errors.
manifests:
  rawYaml:
    - service.yaml
+ portForward:
+   - resourceType: service
+     resourceName: frontend
+     namespace: sample
+     port: 8000
+     localPort: 8000
```

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: sample
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: backend-deployment
  namespace: sample
spec:
  replicas: 2
  selector:
    matchLabels:
      app: backend
  template:
    metadata:
      labels:
        app: backend
    spec:
      containers:
        - name: backend
          image: sample-jib-image:alpha
          imagePullPolicy: IfNotPresent
          ports:
            - containerPort: 8080
          # Comment out until implementing the liveness and readiness probes
          # livenessProbe:
          #   httpGet:
          #     path: /livez
          #     port: 8080
          #   initialDelaySeconds: 15
          #   periodSeconds: 20
          #   timeoutSeconds: 5
          #   successThreshold: 1
          #   failureThreshold: 3
          # readinessProbe:
          #   httpGet:
          #     path: /readyz
          #     port: 8080
          resources:
            requests:
              memory: "512Mi"
              cpu: "500m"
            limits:
              memory: "512Mi"
              cpu: "750m"
---
apiVersion: v1
kind: Service
metadata:
  name: backend
  namespace: sample
  annotations:
    prometheus.io/scrape: 'true'
    prometheus.io/path: '/metrics-micrometer'
    prometheus.io/port: '8080'
spec:
  selector:
    app: backend
  ports:
    - protocol: TCP
      port: 8000
      targetPort: 8080
  type: ClusterIP
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend-deployment
  namespace: sample
spec:
  replicas: 2
  selector:
    matchLabels:
      app: frontend
  template:
    metadata:
      labels:
        app: frontend
    spec:
      containers:
        - name: frontend
          image: sample-jib-image:alpha
          imagePullPolicy: IfNotPresent
          ports:
            - containerPort: 8080
          # Comment out until implementing the liveness and readiness probes
          # livenessProbe:
          #   httpGet:
          #     path: /livez
          #     port: 8080
          #   initialDelaySeconds: 15
          #   periodSeconds: 20
          #   timeoutSeconds: 5
          #   successThreshold: 1
          #   failureThreshold: 3
          # readinessProbe:
          #   httpGet:
          #     path: /readyz
          #     port: 8080
          resources:
            requests:
              memory: "512Mi"
              cpu: "500m"
            limits:
              memory: "512Mi"
              cpu: "750m"
---
apiVersion: v1
kind: Service
metadata:
  name: frontend
  namespace: sample
  annotations:
    prometheus.io/scrape: 'true'
    prometheus.io/path: '/metrics-micrometer'
    prometheus.io/port: '8080'
spec:
  selector:
    app: frontend
  ports:
    - protocol: TCP
      port: 8000
      targetPort: 8080
  type: LoadBalancer
```

Thank you for following along this far. Let's delete the resources created in this blog post with the following commands.

```shell
skaffold delete
docker rmi $(docker images -q sample-jib-image) # Remove the images by ID
# kubectl delete all --all -n sample # If you didn't use skaffold
helm uninstall grafana -n grafana
helm uninstall loki -n grafana
helm uninstall prometheus -n prometheus
```
Introduction

Hi, we're Yao, Bahng, and Lai from the Global Development Division. We're mobile app engineers, usually developing the Global KINTO App. A few months ago, we investigated Kotlin Multiplatform Mobile (KMM) as a preliminary activity for the future of the Global KINTO App. See our previous article on this. The results of that investigation indicate that KMM is an excellent solution for rapid product development. Now that KMM has been revealed as a new approach compatible with Compose UI, we decided to investigate it further. Based on that investigation, this article discusses app development using Kotlin Multiplatform Mobile (KMM) and Compose Multiplatform.

Before getting into the main topic, let's first clarify three points: What is KMP? What is KMM? What is the relationship between KMP and KMM? The answers are as follows:

KMP: Kotlin Multiplatform, referring to the technology used to develop applications across multiple platforms using Kotlin, along with its entire ecosystem.
KMM: Kotlin Multiplatform for Mobile. One of the primary use cases for KMP is code sharing between mobile platforms. In addition to KMP, several technologies specifically for mobile app development are collectively referred to as KMM.

The graph below illustrates the relationship between KMP and KMM.

Reference: -- JetBrains "Kotlin brand assets | Kotlin. (n.d.-c). Kotlin Help.", "Get started with Kotlin Multiplatform for mobile | Kotlin. (n.d.). Kotlin Help." Accessed June 1, 2023

Cross-Platform

You may wonder about the advantages of cross-platform development, particularly when considering Kotlin Multiplatform Mobile (KMM) as a cross-platform solution. Here are the benefits:

Cost-effective: Cross-platform development allows the use of a single codebase across multiple platforms, eliminating the need for separate platform development teams and reducing the cost of app development.
Faster deployment: By leveraging a single codebase, developers can create and launch applications on multiple platforms simultaneously, significantly reducing development time and accelerating time to release.
Simplified maintenance and updates: With a single codebase, apps can be easily maintained and updated, allowing changes to be made once and propagated across all platforms. This streamlines the maintenance process and ensures that all users have access to the latest features.
Consistent user experience: Cross-platform development tools and frameworks let you maintain a consistent look and feel across different platforms, providing a unified user experience. This can lead to improved user satisfaction and retention.
Shared resources and skills: Developers familiar with cross-platform tools and languages can create apps for multiple platforms. This allows for more efficient use of resources and maximizes the return on investment in developer skills and training.

History of Cross-Platform Development Tools for Mobile

In 2009, PhoneGap was created and later renamed Apache Cordova.
In 2011, Xamarin was created by Mono and later acquired by Microsoft.
In 2015, React Native was created by Facebook (Meta).
In the mid-2010s, designer Frances Berriman and Google Chrome engineer Alex Russell coined the term "progressive web app (PWA)," and Google made several efforts to popularize it.
In 2017, Flutter was created by Google.
In 2021, KMM was created by JetBrains.

This means that KMM is currently the most recent cross-platform solution available.

Logo source: -- Apache "Artwork - Apache Cordova. (n.d.)." Accessed June 1, 2023 -- Microsoft "Conceptdev. (n.d.). Xamarin documentation - Xamarin. Microsoft Learn" Accessed June 1, 2023 -- Meta "Introduction · React native." Accessed June 1, 2023 -- Google "Progressive web apps. (n.d.). web.dev." Accessed June 1, 2023 -- Google "Flutter documentation. (n.d.)." Accessed June 1, 2023 -- JetBrains "Kotlin Multiplatform for Cross-Platform development" Accessed June 1, 2023

Why is KMM Different?

Shared business logic: KMM reduces code duplication and maintains consistency between Android and iOS by allowing code related to business logic, networking, and data storage to be shared across platforms.
True native UI: KMM allows the use of platform-specific tools and languages (e.g. XML for Android and SwiftUI or UIKit for iOS) for UI development, resulting in a more native look and feel compared to other cross-platform solutions.
Performance: Kotlin code is compiled into native binaries for each platform, resulting in high-performance applications comparable to native development.
Seamless integration: KMM can be integrated into existing projects, so developers can adopt it incrementally and migrate shared logic to Kotlin without having to completely rewrite their apps.
Interoperability with native libraries: KMM seamlessly interoperates with both Android and iOS native libraries, facilitating the use of existing libraries and frameworks.
Benefits of the Kotlin language: Kotlin is a modern and concise language that provides similar functionality to existing alternatives while reducing redundant code, with tool support from JetBrains.

The above points are explained in detail below.

(1) Shared Business Logic

KMM is used when implementing the data, business, and presentation layers in new projects.

Flexibility: KMM allows developers to determine the scope of code they want to share, offering a flexible implementation balanced with platform-specific code as needed.
Consistency assurance: While differences in UI can be easily detected in QA testing, logic inconsistencies between Android and iOS are difficult to detect. By using KMM, the same code is used on both platforms, ensuring consistency.

(2) Truly Native UI

KMM supports native UI, uses native UI components, and follows platform-specific design patterns.
Android: XML layouts, Jetpack Compose, etc.
iOS: UIKit, SwiftUI, etc.

UI performance: KMM uses native UI components, and since the Kotlin code is compiled into native binaries for each platform, its performance is generally comparable to native apps.
Easy platform updates: Because KMM uses each platform's native UI framework, it is easy for developers to adopt new platform features and designs.

(3) Performance

No JavaScript bridge is required, and there is no reliance on third-party libraries. KMM uses the system's default rendering engine, reducing resource consumption compared to other cross-platform solutions.

Native code compilation: KMM compiles Kotlin code into native binaries for each platform, which enhances app efficiency and overall performance.
Android: standard Kotlin/JVM
iOS: Kotlin/Native compiler (Objective-C interop)

(4) Seamless Integration

There is no need to bridge native modules or rewrite existing code.

Phased adoption: KMM can be gradually introduced into existing native Android and iOS projects. This allows teams to share business logic, networking, and data storage code across platforms in phases, reducing the risks associated with a complete technology switch.

Multiple approaches to using KMM modules in iOS:
CocoaPods
Gradle plugin and git submodules
Framework
Swift Package Manager (SPM): Starting with Kotlin 1.5.30, KMM modules can be used in iOS projects via the Swift Package Manager.

(5) Interoperability with Native Libraries

Access to native APIs and libraries: KMM provides direct access to native APIs and libraries, facilitating easy integration with platform-specific functions and hardware components such as sensors and Bluetooth.
Seamless integration with platform-specific code: KMM allows for writing platform-specific code as needed, which is useful when dealing with complex native libraries or accessing features not available through shared Kotlin code.
Kotlin/Native: KMM uses Kotlin/Native for iOS, which allows seamless interoperability with Objective-C and Swift code. This means that existing iOS libraries and frameworks can be used without additional bridging or wrapping code.

(6) Kotlin Language Benefits

Language features: modern, static typing, null safety, extension functions, data classes, smart casts, interoperability with Java
Tools and support: Kotlin provides exceptional support and first-class integration in Android Studio and IntelliJ IDEA.
Industry adoption: Kotlin has seen rapid adoption since becoming the official programming language for Android development. Many backend developers also use Kotlin.

What Kind of Companies are Using KMM?

In fact, several companies have adopted Kotlin Multiplatform Mobile (KMM) for mobile app development. Here are some notable examples:

Netflix: Netflix uses KMM in some of its internal tools to share code between Android and iOS apps.
VMware: VMware uses KMM for cross-platform development of the Workspace ONE Intelligent Hub app (an employee management tool for Android and iOS).
Yandex: Yandex, a Russian multinational technology company, has adopted KMM in several of its mobile apps, including Yandex Maps and Yandex Disk.
Quizlet: Quizlet, an online learning platform, uses KMM to share code between its Android and iOS apps, improving development efficiency.

These companies represent diverse industries, and their adoption of KMM demonstrates the flexibility and usefulness of the technology in different contexts. As KMM becomes more popular, it's likely that even more companies will adopt it to meet their cross-platform mobile development needs.

Reference: -- JetBrains "Case studies. (n.d.). Kotlin Multiplatform." Accessed June 1, 2023

How to Easily Create a KMM Project

Given these benefits, would you like to create a KMM project and give it a try? Here is how:

1. Download the latest Android Studio.
2. In Android Studio, select File > New > New Project.
3. Select Kotlin Multiplatform App in the list of project templates, and click Next.
4. Specify the Name of the new project and click Next.
5. For the iOS framework distribution, select Regular framework.
6. Keep the default names for the Application and Shared folders.
7. Click Finish.

-- JetBrains "Create your first cross-platform app | Kotlin. (n.d.). Kotlin Help." Accessed June 1, 2023

Mobile App Architecture Using KMM

The diagram below shows one of the common KMM architecture patterns. This architecture takes full advantage of KMM's characteristic code sharing. Data persistence (including cache and database), networking, use cases, and view models are all implemented in KMM. For the UI, both Android and iOS use native UI components; both older frameworks such as XML and UIKit and newer frameworks such as Jetpack Compose and SwiftUI are supported. This architecture allows business logic modules written in Kotlin to be imported into iOS as SDKs, so iOS developers can focus on UI development for efficient development.

Here's some iOS code for a simple screen with an FAQ list. Except for the common UI utility class, this is all that needs to be implemented.

```swift
// FaqView.swift
struct FaqView: View {
    private let viewModel = FaqViewModel()
    @State var state: FaqContractState

    init() {
        state = viewModel.createInitialState()
    }

    var body: some View {
        NavigationView {
            listView()
        }
        .onAppear {
            viewModel.uiState.collect(
                collector: Collector<FaqContractState> { self.state = $0 }
            ) { possibleError in
                print("finished with possible error")
            }
        }
    }

    private func listView() -> AnyView {
        manageResourceState(
            resourceState: state.uiState,
            successView: { data in
                guard let list = data as? [Faq] else {
                    return AnyView(Text("error"))
                }
                return AnyView(
                    List {
                        ForEach(list, id: \.self) { item in
                            Text(item.description)
                        }
                    }
                )
            },
            onTryAgain: { viewModel.setEvent(event: FaqContractEvent.Retry()) },
            onCheckAgain: { viewModel.setEvent(event: FaqContractEvent.Retry()) }
        )
    }
}
```

That's not all about KMM.
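For context, the `FaqViewModel` and `FaqContractState` consumed by that Swift view live in the shared Kotlin module. The sketch below is a heavily simplified, dependency-free illustration of what such a shared contract could look like: the names mirror the Swift code above, but state delivery is reduced to a plain callback (a real KMM module would expose a coroutines `StateFlow`, as the `viewModel.uiState.collect` call on the iOS side suggests), so treat it as a sketch rather than the actual implementation.

```kotlin
// Simplified sketch of a shared KMM contract like the one FaqView.swift consumes.
// State delivery uses a plain callback instead of a coroutines StateFlow
// so the sketch stays dependency-free.
data class Faq(val description: String)

sealed class FaqUiState {
    object Loading : FaqUiState()
    data class Success(val items: List<Faq>) : FaqUiState()
    data class Error(val message: String) : FaqUiState()
}

data class FaqContractState(val uiState: FaqUiState)

sealed class FaqContractEvent {
    class Retry : FaqContractEvent() // mirrors FaqContractEvent.Retry() on the Swift side
}

class FaqViewModel(private val loadFaqs: () -> List<Faq>) {
    private var onState: (FaqContractState) -> Unit = {}

    fun createInitialState() = FaqContractState(FaqUiState.Loading)

    // The iOS side subscribes here; each new state is pushed to the collector.
    fun collect(collector: (FaqContractState) -> Unit) {
        onState = collector
        load()
    }

    fun setEvent(event: FaqContractEvent) {
        if (event is FaqContractEvent.Retry) load()
    }

    private fun load() {
        onState(
            try {
                FaqContractState(FaqUiState.Success(loadFaqs()))
            } catch (e: Exception) {
                FaqContractState(FaqUiState.Error(e.message ?: "unknown"))
            }
        )
    }
}
```

Because the contract is plain Kotlin, the same state machine can drive both the SwiftUI view above and a Jetpack Compose screen on Android.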
KMM has even more potential!

Architecture That Shares UI Code

In addition to business logic code, KMM can also share UI code using Compose Multiplatform. As discussed earlier, Kotlin Multiplatform Mobile (KMM) is primarily used for implementing shared business logic, but it also supports shared UI development. Compose Multiplatform is a declarative framework for sharing UI across multiple platforms using Kotlin. Based on Jetpack Compose, it was developed by JetBrains and open source contributors. Combining KMM with Compose Multiplatform allows both the logic code and the UI to be built in the Kotlin language.

Reference: -- JetBrains "Kotlin brand assets | Kotlin. (n.d.-c). Kotlin Help.", "Compose multiplatform UI framework | JetBrains. (n.d.). JetBrains: Developer Tools for Professionals and Teams." Accessed June 1, 2023

Comparison of Different Patterns of KMM Architecture

Assuming a mobile project is being developed, the estimated workload for each client is as follows: UI: 2 people, Presentation: 1 person, Business/Domain: 1 person, Data/Core: 1 person. The workload saved in each pattern depends on the percentage of code written in KMM (building the two platforms entirely separately would cost 10).

| Pattern | A | B | C | D |
| --- | --- | --- | --- | --- |
| UI | 2*2 | 2*2 | 2*2 | 2 |
| Presentation | 1*2 | 1*2 | 1 | 1 |
| Business/Domain | 1*2 | 1 | 1 | 1 |
| Data/Core | 1 | 1 | 1 | 1 |
| Total | 9 | 8 | 7 | 5 |
| Workload cost | -10% | -20% | -30% | -50% |

KMM can reduce workload by up to 50%. The biggest advantage of KMM compared to other cross-platform solutions is its flexibility in code sharing: how much code to share with KMM is entirely up to us. Other cross-platform solutions do not offer this level of flexibility.

Summary

Cons of KMM

Of course, every tool has its drawbacks, and KMM is no exception.

Limited platform support: Kotlin Multiplatform Mobile can target multiple platforms, but not all platforms are supported. For example, it does not currently support web or desktop applications.
Learning cost: If you are not familiar with Kotlin, there is a learning cost to using it effectively for multiplatform development.
Framework compatibility: Kotlin Multiplatform Mobile can be used with various frameworks, but is not compatible with all of them. This limits your options and may require you to work within certain constraints.
Maintenance overhead: Maintaining a multiplatform codebase can be more complex than maintaining a separate codebase for each platform. This added complexity can lead to increased overhead in testing, debugging, and maintenance.
Tool limitations: Some tools and libraries may not be compatible with Kotlin Multiplatform Mobile, making development more complicated or requiring the search for alternative solutions.

Applications

As mentioned above, integrating KMM's architecture into a project can be considered in various situations, each with its pros and cons.

| Pattern | A | B | C | D |
| --- | --- | --- | --- | --- |
| General existing project | ✓ | ✓ | ✓ | ? |
| Simple existing project | ✓ | ✓ | ✓ | ✓ |
| Complex existing project | ✓ | ✓ | ✓ | ✗ |
| New project | ✓ | ✓ | ✓ | ✓ |
| Prototype | ✓ | ✓ | ✓ | ✓ |

With the technical benefits covered, let's get back to the actual development process. Like most mobile development teams, ours is small. Given our limited engineering resources, when faced with a significant change, such as upgrading from version 1.0 to 2.0, we need to collaborate with other divisions and with both onsite and offshore outsourcing teams to ensure a quick release. However, there are several problems in this process:

Seamless collaboration between different teams is challenging.
With more developers and different teams in different offices, communication costs increase.
It becomes difficult to maintain consistency across different teams.
Working with external teams makes it difficult to manage the security of sensitive information.

KMM can address almost all of these problems by developing core modules, defining protocols, and adopting a separate approach for UI and logic development:

It allows each team to focus on their own part, which greatly facilitates collaboration.
It reduces the time and cost required for communication.
By having the core modules developed consistently by the KMM team, most inconsistencies are eliminated in advance.
Although KMM supports a single codebase, the separation of the UI and logic layers allows for the use of multiple repositories. The core modules are developed by the KMM team and provided to external teams as an SDK. This eliminates the need to disclose the source code to external teams and reduces the risk of leaking confidential information. This is difficult to achieve with other cross-platform technology solutions.

In conclusion, KMM brings significant benefits not only in terms of technical advantages but also in fostering cooperation across divisions and companies.

Conclusion

Given the importance of KMM in new projects and its potential for significant workload savings, we have already integrated KMM into new projects for the next major release. We will continue to monitor new technologies and tools related to KMM and seek opportunities to further enhance efficiency.
Introduction

Hello. My name is Nakaguchi, and I am the team leader of the iOS team in the Mobile Development Group. In my day-to-day job, I'm involved in iOS development for:

KINTO Kantan Moushikomi App (KINTO Easy Application App)
Prism Japan (Smartphone app version / Recently released web version)

As a retrospective of iOSDC Japan 2024, held from Thursday, August 22nd to Saturday, August 24th, we hosted the iOSDC JAPAN 2024 AFTER PARTY on Monday, September 9th, 2024. I'd like to look back on why I held the event, the preparations leading up to it, and how it went. In particular, regarding "why I held it," I will present my own thoughts, and I would be happy if many people can relate to them.

This blog is for:

Those who participated in this event
Those who attended iOSDC
Those who often participate, or would like to participate, in events
Those who organize, or would like to organize, events

I'm also writing this as a tech blog post to share my experience with as many people as possible, because hosting this event gave my own motivation an enormous boost.

Why I Held the Event

I had been planning this event in my mind since around April. If you asked me why, I honestly don't think I could put it into words. Since taking on the role of team leader in October of last year, I have made an effort to attend many events that interest me, not only those related to iOS, but also ones on development productivity, organizational management, engineering management, and so on. In doing so, I noticed the following feelings arising:

Participating in an event really boosts your motivation.
The people who speak at events and organize them are so cool!

If I had to put my feelings into words, it would be: "It's kind of cool! I want to host an event myself!" That's how I felt back in April.
However, the purpose of an event that requires investing substantial resources (money, time, and people) cannot be explained simply by "because it's cool." After that, I began to struggle within myself over the significance of hosting an event. Even now that the event has ended, I don't think I've reached a clear answer. (I'm just grateful that we were able to hold the event under such ambiguous circumstances.)

When hosting an event as part of an organization, certain expectations are inevitably placed upon you. Commonly mentioned benefits include "increasing the organization's presence," "spreading the word about services," and "leading to recruitment." These are all great benefits of holding an event properly, and if these results materialize, the event can be called a great success. However, there are some aspects of this that don't quite sit right with me personally. I believe that most participants in IT industry events attend for self-improvement, such as "I want to acquire new knowledge," "I want to expand my network," or "I enjoy participating in the event itself," and it is very rare for people to attend because they want to know what kind of organization the organizer is, what services it offers, or because they want to change jobs to that company.

After struggling with the significance of holding events, I came to my own conclusion: "I want my motivation to be contagious to as many people as possible."

As I mentioned above, when I participate in an event, I feel a huge boost in motivation, and I think many others feel the same way. I believe that if even one more person wants to work harder tomorrow, the accumulation of those efforts will lead to a better world. Also, as motivation increases, some people may want to host events like I did, or to speak at one. In turn, others will see this and want to do the same. I believe that good motivation like this is surely contagious!
So, at this stage, I decided to hold this event with "I want my motivation to be contagious to as many people as possible" as its significance (although I hadn't organized my thoughts to this extent back in April when I first came up with the idea). (And, from an organizational perspective, the mere fact that it increases motivation doesn't mean we should start holding events one after another, so it looks like the days of struggle will continue for a while.) Next, I would like to give you an overview of this event. Event Overview Event name: iOSDC JAPAN 2024 AFTER PARTY Date and time: Monday, September 9, 2024 from 19:00 Participants: Around 20 people Three companies (WealthNavi, TimeTree, and us) held a joint meeting as the iOSDC Retrospective. There were three LT presentations, one from each company, plus a panel discussion with three people, one from each company. Now, let me introduce the process leading up to this event. Until the Event In April, I came up with the idea to hold a mobile development-related event, but I was unsure how to proceed. We have a Developer Relations Group (DevRel) that provides support for event management, so I thought that if I reached out to them, I could run the event smoothly. On the other hand, attracting attendees, calling for speakers, and deciding on the theme of the event are challenging even with the support of our Developer Relations Group, so we determined that organizing a mobile-related event on our own would be difficult. Under these circumstances, we wanted to ask Findy for their help, as they put a lot of effort into hosting events and have extensive know-how in attracting attendees and recruiting speakers. So, we attended this event, which was held in May. I have also posted an Event Participation Report on this blog, so please take a look. This event gave us the opportunity to exchange information with the person in charge at Findy.
After much discussion about what kind of event to hold, we were introduced to WealthNavi and TimeTree, and decided to hold an iOSDC retrospective event. I want to extend my thanks to Findy for their advice and help in organizing the event, and to WealthNavi and TimeTree for co-hosting the event. After the three companies decided to hold an iOSDC Retrospective, many things were decided smoothly, including: How to structure the event Speakers and panelists for the panel discussion Date and time of the event Now that the event recruitment page on Connpass has been successfully completed, the next step is to recruit participants. This time, all three companies shared the desire to place emphasis on communication with event participants, so the event was offline only. Since the event was held in our company's event space, we aimed to recruit around 30 people, given the capacity. We opened the Connpass page on Thursday, August 8th, 2024, and within a few days we had about 10 people register to attend, which we thought was a good number of participants. However, the actual event promotion would take place from August 22nd to 24th, when the iOSDC would be held, so I thought it would be up to us to see how much we could increase participation during that period. This year, we displayed our first sponsor booth, which allowed us to promote the event there and carry out PR by posting on our official X page during the iOSDC period. As a result, the number of registrations that increased during the iOSDC period was **"0"** ...! *To be honest, I was lazy about the event's call for participants.* Looking back, I think there was a need to improve the way we promoted the event at the sponsor booth. Rather than just handing out flyers, we should have put more thought into creating a flow of people to register on the spot (for example, handing out novelties to people who registered). Here is a reminder for next time. 
In fact, when we checked the statistics on the event page on Connpass, we could see that there were absolutely no registrations between August 22nd (Thu) and 24th (Sat), and that there was no increase in page views at all. ![](/assets/blog/authors/nakaguchi/2024-09-12-after-iosdc/connpass.png)*Statistics confirmed by Connpass* After that, up until Monday, September 9th, participants gradually registered at the pace shown in the image above. I also had the opportunity to take the time to announce the event when I attended another company's event, so we were able to have 24 participants registered as of the day of the event. I felt that the theme of "iOSDC Retrospective Event" was effective in drawing in a certain number of people. Although we did not reach our initial goal of 30 registrations, I personally felt that the number of registrants was more than sufficient for the first organized event. Now all that was left was to wait for the day. On the Day of Event These kinds of events are often subject to cancellation on the day of the event for a variety of reasons. In fact, several participants unfortunately canceled on the day of this event as well. However, with the day arriving, I didn't have the time to be overly excited or upset about the increase or decrease in the number of participants. We focused on making this an event that was worth attending for co-hosts WealthNavi and TimeTree, as well as for all participants who joined us on the day. Here's a quick look back at what happened on the day. We waited nervously for everyone to arrive. The venue seemed to be set up. Venue set-up completed It was 7pm, and with WealthNavi, TimeTree, and all the participants present, the first LT session was about to begin. "DX: Digital transformation starting with Package.swift" presented by Muta-san from WealthNavi. 
Muta-san's presentation I learned a lot from his explanation of the basics of Swift Package Manager, including aspects that I thought I knew but actually did not. I believe it was a valuable opportunity to hear about the initiatives of WealthNavi and what they envision for the future. I also learned a lot from the explanation of Swift 6, which is coming up soon. Next was the second LT. "Morphological Analysis of iOSDC Proposals to Explore Trend Transitions" presented by Sakaguchi-san from TimeTree. Sakaguchi-san's presentation I was very interested in this presentation from the moment I saw the title. I have attended iOSDC several times in the past, and I feel that there are certain trends in the sessions, which was interesting to see reflected in the proposals. In addition, this analysis tool was created using Xcode, and it was fun to see it being demonstrated on a simulator during the presentation. Next was the third LT. "I want to share what we did before our first exhibit at iOSDC" presented by Hinomori-san from KINTO Technologies. Hinomori-san's presentation Since this was our first time exhibiting at a sponsor booth, he shared the challenges we faced during the preparation period. I was also involved in preparing some of the exhibits, and it was quite difficult to figure out through trial and error what kind of content would resonate with visitors and how to make it more visually appealing. Please take a look at what we produced as a sponsor, which is introduced in more detail on the Tech Blog here . Next, there was a panel discussion, followed by a break and a toast. The panelists were: Cho-san from WealthNavi Masaichi-san from TimeTree Hinomori-san from KINTO Technologies And I was the moderator of the session. Panel Discussion Members These topics were prepared in advance as we looked back on the iOSDC. The topics were decided after interviewing the panelists in advance to find out what kind of content they would be interested in.
Panel Discussion Topics Due to time constraints, we were unable to discuss all the topics, but we made a conscious effort to proceed by reading the room and picking out topics that fit the flow of the moment. They talked about the status of iOS development at each company, their efforts toward iOSDC, and the changes this year compared to previous years. Panelists Finally, a group photo was taken with all participants. Group photo Thoughts After the Event As I mentioned at the beginning, I started planning this event around April and was able to hold it. I was constantly anxious about whether the event could be held smoothly, whether the participants would show up, and whether my moderation on the day would go well. I personally feel that we were able to hold a very successful event, thanks to the cooperation of WealthNavi and TimeTree, our co-hosts, as well as the support of the Developer Relations Group and the organizing staff on the day of the event. Of course, everyone who participated on the day also made the event a great success. I would like to express my sincere gratitude to everyone who was involved in this event. ● What I liked It was invaluable to be able to connect with other companies such as WealthNavi, TimeTree, and Findy when hosting the event. Additionally, since this was my first time organizing an event, I gained confidence from being able to see it through successfully. ● What I'd like to improve in the future As I mentioned earlier, I find it quite challenging to attract participants. Since I haven't found a good solution to this yet, I'd like to think it through carefully with everyone involved the next time we organize an event. I also wish more team members from our iOS team could have participated in this event. At this event, Assistant Manager Hinomori-san took the stage as an LT speaker and panelist.
While he usually has many opportunities to speak at events, I wanted to encourage team members who don't often get the chance to take on that challenge. However, when we reached out for speakers within the company, there were no volunteers from the team members, so we decided to have Hinomori-san take the stage. I personally feel that there are major areas for improvement going forward, such as lowering the hurdles to speaking at the internal recruitment stage and establishing a support system for preparing speaking sessions. Conclusion In October, we are planning to hold a retrospective event for DroidKaigi 2024 together with WealthNavi and TimeTree, and we hope to continue holding such events on an irregular basis in the future. As I said at the beginning, "I want to spread motivation to as many people as possible," and I feel that the person who was most motivated by this event was none other than myself. If there were participants who felt that their motivation had increased, then I would consider this event a great success. I'd like to continue to motivate everyone involved through various activities, including holding events like this one.
Hello! I’m Wada ( @cognac_n ), an AI Evangelist at KINTO Technologies. How do you manage your prompts? Today, I will introduce Prompty, a tool that simplifies creating/editing, testing, implementing, and organizing prompts! 1. What is Prompty? Prompty is a tool designed to streamline prompt development for large language models (LLMs). It centralizes prompts and parameters in a YAML-based format, making it well suited to version control on GitHub and improving collaboration in team environments. Using the Visual Studio Code (VS Code) extension can greatly improve the efficiency of prompt engineering. Benefits of Introducing Prompty Although integration with Azure AI Studio and Prompt Flow offers benefits, this article will focus on the integration with VS Code. Who should consider using Prompty: Those looking to speed up prompt development Developers who need version control for prompts Teams collaborating on prompt creation Anyone wanting to simplify prompt execution on the application side https://github.com/microsoft/prompty 2. Prerequisites Requirements (at the time of writing) Python 3.9 or higher VS Code (if using the extension) OpenAI API Key or Azure OpenAI Endpoint (depending on the LLM in use) Installation and initial setup Install the VS Code extension https://marketplace.visualstudio.com/items?itemName=ms-toolsai.prompty Use pip or other package managers to install the necessary libraries pip install prompty https://pypi.org/project/prompty/ 3. Try It Out 3-1. Create a New Prompty File Right-click in the Explorer tab and select "New Prompty" to create a template.
![New Prompty](/assets/blog/authors/s.wada/20240821/image_2.png =350x) New Prompty

The created template is as follows:

---
name: ExamplePrompt
description: A prompt that uses context to ground an incoming question
authors:
  - Seth Juarez
model:
  api: chat
  configuration:
    type: azure_openai
    azure_endpoint: ${env:AZURE_OPENAI_ENDPOINT}
    azure_deployment: <your-deployment>
  parameters:
    max_tokens: 3000
sample:
  firstName: Seth
  context: >
    The Alpine Explorer Tent boasts a detachable divider for privacy, numerous mesh windows and adjustable vents for ventilation, and a waterproof design. It even has a built-in gear loft for storing your outdoor essentials. In short, it's a blend of privacy, comfort, and convenience, making it your second home in the heart of nature!
  question: What can you tell me about your tents?
---

system:
You are an AI assistant who helps people find information. As the assistant, you answer questions briefly, succinctly, and in a personable manner using markdown and even add some personal flair with appropriate emojis.

# Customer
You are helping {{firstName}} to find answers to their questions. Use their name to address them in your responses.

# Context
Use the following context to provide a more personalized response to {{firstName}}:
{{context}}

user:
{{question}}

In the area enclosed by ---, specify parameters. Below this section, add the main content of the prompt. You can define roles using system: or user:.

Basic Parameter Overview

| Parameter | Description |
| --- | --- |
| name | Specifies the name of the prompt |
| description | Provides a description of the prompt |
| authors | Includes information about the prompt creators |
| model | Details the AI model used in the prompt |
| sample | If the prompt contains placeholders such as {{context}}, the content specified here is substituted during testing |

3-2. Configuring API Keys and Parameters There are several ways to set the required API keys, endpoint information, and execution parameters.
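One pattern common to all of the options that follow: values written as ${env:NAME} are pulled from environment variables. As a rough mental model, the expansion could work like the sketch below (a hypothetical simplification for illustration, not Prompty's actual implementation; real behavior for unset variables may differ):

```python
import os
import re

def resolve_env_refs(value: str) -> str:
    # Expand each ${env:NAME} reference from the process environment.
    # In this sketch, unset variables expand to an empty string.
    return re.sub(
        r"\$\{env:(\w+)\}",
        lambda m: os.environ.get(m.group(1), ""),
        value,
    )

os.environ["AZURE_OPENAI_ENDPOINT"] = "https://example.openai.azure.com"
print(resolve_env_refs("${env:AZURE_OPENAI_ENDPOINT}"))
# https://example.openai.azure.com
```

Keeping secrets behind ${env:...} references like this is what makes a .prompty file safe to commit to GitHub.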
[Option 1] Specifying in the .prompty file This involves directly adding these details to the .prompty file.

model:
  api: chat
  configuration:
    type: azure_openai
    azure_endpoint: ${env:AZURE_OPENAI_ENDPOINT}
    azure_deployment: <your-deployment>
  parameters:
    max_tokens: 3000

You can also reference environment variables, such as ${env:AZURE_OPENAI_ENDPOINT}. However, please note that azure_openai_api_key cannot be configured in this way. ![azure_openai_api_key cannot be written directly in the .prompty file](/assets/blog/authors/s.wada/20240821/image_3.png =750x) azure_openai_api_key cannot be written directly in the .prompty file [Option 2] Configuring with settings.json Another approach is to use VS Code’s settings.json. If the settings are incomplete and you click the play button in the upper-right corner, you will be prompted to edit settings.json. You can create multiple configurations beyond the default definition and switch between them during testing. When type is set to azure_openai and api_key is left empty, the process will direct you to authenticate using Azure Entra ID, as explained later.

{
  "prompty.modelConfigurations": [
    {
      "name": "default",
      "type": "azure_openai",
      "api_version": "2023-12-01-preview",
      "azure_endpoint": "${env:AZURE_OPENAI_ENDPOINT}",
      "azure_deployment": "",
      "api_key": "${env:AZURE_OPENAI_API_KEY}"
    },
    {
      "name": "gpt-3.5-turbo",
      "type": "openai",
      "api_key": "${env:OPENAI_API_KEY}",
      "organization": "${env:OPENAI_ORG_ID}",
      "base_url": "${env:OPENAI_BASE_URL}"
    }
  ]
}

[Option 3] Configuration with a .env file By creating a .env file, environment variables can be read directly from it. Note that the .env file must be located in the same directory as the .prompty file you are using. This setup is especially convenient for local testing.
AZURE_OPENAI_API_KEY=YOUR_AZURE_OPENAI_API_KEY
AZURE_OPENAI_ENDPOINT=YOUR_AZURE_OPENAI_ENDPOINT
AZURE_OPENAI_API_VERSION=YOUR_AZURE_OPENAI_API_VERSION

[Option 4] Configuring with Azure Entra ID By signing in with an Azure Entra ID that has the appropriate permissions, you can access the API. I haven’t tested this option yet. 3-3. Running Prompts in VS Code You can easily execute prompts by clicking the play button in the upper right corner. The results are displayed in the OUTPUT section. To view raw data, including placeholder substitution and token usage, select "Prompty Output (Verbose)" from the dropdown in the OUTPUT panel. This option is useful for checking detailed information. Use the play button in the upper right to run the prompt. Results can be seen in the OUTPUT section. 3-4. Other Parameters Various parameters are introduced on the following page. Defining options like inputs and outputs, especially when using JSON mode, improves prompt visibility, so be sure to set them.

inputs:
  firstName:
    type: str
    description: The first name of the person asking the question.
  context:
    type: str
    description: The context or description of the item or topic being discussed.
  question:
    type: str
    description: The specific question being asked.

3-5. Integrating with an Application The syntax for integration may vary depending on the library used in your application. As Prompty is frequently updated, be sure to check the latest documentation regularly. Here’s an example code snippet demonstrating the use of Prompty with Prompt Flow . This allows for simple prompt execution.
from promptflow.core import Prompty, AzureOpenAIModelConfiguration

# Set up configuration to load Prompty using AzureOpenAIModelConfiguration
configuration = AzureOpenAIModelConfiguration(
    azure_deployment="gpt-4o",  # Specify the deployment name for Azure OpenAI
    api_key="${env:AZURE_OPENAI_API_KEY}",  # Retrieve the API key from environment variables
    api_version="${env:AZURE_OPENAI_API_VERSION}",  # Retrieve the API version from environment variables
    azure_endpoint="${env:AZURE_OPENAI_ENDPOINT}",  # Retrieve the Azure endpoint from environment variables
)

# Configure to override model parameters; here, max_tokens is overridden as an example
override_model = {"configuration": configuration, "max_tokens": 2048}

# Load Prompty with the overridden model settings
prompty = Prompty.load(
    source="to_your_prompty_file_path",  # Specify the Prompty file to use
    model=override_model,  # Apply the overridden model settings
)

# Execute Prompty based on the provided values and obtain the result
result = prompty(firstName=first_name, context=context, question=question)

4. Summary Prompty is a powerful tool that can significantly streamline prompt engineering tasks. In particular, the development environment integrated with VS Code allows for seamless creation, testing, implementation, and management of prompts, making it highly user-friendly. Mastering Prompty can greatly enhance the efficiency and quality of prompt engineering. I encourage everyone to give it a try! Benefits of Introducing Prompty (Repost) We Are Hiring! At KINTO Technologies, we are seeking colleagues to help drive the adoption of generative AI in our business. We are open to casual interviews, so if you’re even slightly interested, please contact us via the link below or through X DM . We look forward to hearing from you! https://hrmos.co/pages/kinto-technologies/jobs/1955878275904303115 Learn more about how we work with generative AI here.
https://blog.kinto-technologies.com/posts/2024-01-26-GenerativeAIDevelopProject/ Thank you for reading this far.
Introduction Hello, I'm Osanai, the leader of the SRE team in the Platform Group at KINTO Technologies (KTC). In this article, I'd like to share how we formulated the SRE team's mission and vision. *If you just want to see the final mission, please jump ahead here. Why We Defined a Mission and Vision There were three main reasons behind our decision to define a mission and vision. 1. A proposal from a team member The SRE team was launched in January 2021, but after various twists and turns, it was down to a single member as of the end of March this year. Against that backdrop, a member who joined in April proposed, "Why don't we create a mission and vision for the SRE team?" That was the starting point. At his previous company, the corporate mission and vision were rooted in everyday work and functioned well; at the time, though, I didn't really see the need and felt we could get to it when we had time (sorry!). 2. The need to show upper management a team roadmap Meanwhile, with only two members, the SRE team was stretched thin on resources, so we were about to launch initiatives to strengthen recruiting. In the process, we needed to clearly convey to upper management what the SRE team wants to achieve, what challenges stand in the way, and what kind of people, and how many, we need to solve them. That led us to create an SRE team roadmap, and we came to feel it would be better to also have a mission and vision at a more abstract layer to guide the SRE team's activities. 3. The diversification of the term "SRE" While browsing the missions and visions of SRE teams at various companies, I came across this slide. From "SREは何を目指すのか" (What Does SRE Aim For?) Personally, this one resonated with me the most. At KTC, alongside the SRE team, we have cross-functional teams and groups such as Platform Engineering, Cloud Infrastructure, DBRE, CCoE, and Security. The term "SRE" covers a very broad area, so I felt we needed to clarify what we ourselves should do given this surrounding environment. How We Decided So we resolved to define a mission and vision, but with no guidelines on how to do it, we started by feeling our way. First, regarding how long and how much time to spend: deciding it bit by bit in weekly meetings seemed likely to drag on, so we opted to decide within a fairly short period. We considered blocking out a full day, but since the quality of ideas can depend on one's condition that day, we settled on one hour a day over five business days spanning a weekend. Having actually done it, I personally felt that spreading it over multiple days was the better choice (ideas came to me in the bath and before falling asleep). As for the method, we decided to follow Google re:Work, proceeding along the manager guide "Set and communicate a team vision." Since our goal was to formulate a mission and vision, we went only as far as deciding the core values, purpose, and mission from that content. For the vision, we decided to build on the finished mission from the perspective of "Once the mission is realized, what should the SRE team, and by extension the whole company, look like?" Day 1: Listing the values we want to cherish On the first day, each team member started by listing the values they want to cherish. We used Miro as our collaboration tool, and each of us wrote on sticky notes what we want to value, including non-technical things. Since it can be hard to come up with "things you want to value," I found it effective to approach it from the opposite direction: what situations would we hate to end up in? We then talked freely about the values of people each of us holds in high regard. Day 2: Digging into the team's core values On the second day, we discussed the values we had listed on day one. Even for different items, gradually abstracting them, asking "Why do you want to value that?" in the style of a five-whys analysis, often led to similar underlying values, so taking notes of those as you go may help when it's time to decide on the mission.
Next, we considered how to explain the values we empathized with and what concrete behaviors they imply. Since there were two of us, we each picked several of the other's values that resonated and dug into them. For example, one value was "better output through good collaboration," but it was a bit abstract, so we replaced it with concrete words, asking what "good collaboration" and "good output" actually mean, and the picture became much clearer. Day 3: Considering the team's reason for existence On the third day, we thought about our purpose, the team's reason for existence. We built up answers through discussion to the six questions under "Why does this team exist?" One thing I felt warrants caution: these questions concern the current state, so the answers about the fundamental reason for existence can be biased (especially if you want to bring change to the current organization). By looking back at what we had done so far, abstracting it, and re-examining why we had done it, candidates for our fundamental reason for existence came into view. Day 4: Deciding the mission On the fourth day, we finally decided the mission. First, as self-reflection, each of us wrote out thoughts on three questions on sticky notes. Then, drawing on everything since day one, we worked out the mission. Honestly, it partly came down to whether the right words would come to us, but we picked out keywords from the work and conversations so far and shaped an expression that captured them. We also checked that it satisfied the five characteristics of a mission before finalizing it. Day 5: Deciding the vision On the fifth day, we decided the vision by imagining what the SRE team, and by extension the whole company, should look like once the mission is realized. The Miro board we actually created (for atmosphere only) The Mission and Vision We Decided Here are the mission and vision we actually settled on. To explain the mission: first, the part about "enabling products to be delivered as fast as possible." KTC has a wide variety of products, and we want to create an environment where features reach users as quickly as possible and we can gather feedback. But delivering quickly alone is not enough; we need to deliver products that are "valuable" to users. And no matter how quickly we can deliver a valuable product, it is meaningless if users cannot use it with satisfaction, so we added the words "highly reliable." For the vision, we imagined what KTC would look like once it could deliver highly reliable, valuable products at top speed, and concluded that what is needed to balance the quality of "highly reliable, valuable products" with the speed of "fastest possible delivery" is to balance development and operations based on service levels. Closing We successfully formulated the team's mission and vision. It has only been a short while, but conversations are already emerging that check what we are about to do against the mission: should we really do this, and if so, how far should we take it? I have a good feeling it will serve well as the team's guiding principle. But formulating it is not the end. We plan to create a roadmap for realizing the mission and vision and work on it as one team. The SRE team is also looking for people to work with us. If you're even slightly interested, please feel free to get in touch. We look forward to hearing from you! https://hrmos.co/pages/kinto-technologies/jobs/1811937538128224258
Introduction Hello! My name is Romie. I’m in the Mobile App Development Group, and I’m responsible for developing the my route app for Android. At KINTO Technologies Corporation (KTC), we have access to Udemy Business accounts, giving us access to a wide range of courses! This time, I chose the course Kotlin Coroutines and Flow for Android Development . Taught entirely in English, it covers the basics of asynchronous processing in Android and demonstrates how to use Coroutines and Flow. Reflections on the Course Here are my honest impressions of the course: The English is straightforward and easy to understand. Aside from Android-specific terms, there are almost no difficult words. So, I highly recommend this course for anyone who has moved beyond the beginner stage and wants to learn more about asynchronous processing, Coroutines, and Flow, while also practicing their English! Topics that left an impression on me Coroutines and Flow differ from traditional asynchronous approaches in that they make it easy to move work off the main thread and write asynchronous code in a sequential style. Additionally, because Coroutines and Flow are official Kotlin libraries (kotlinx.coroutines), there's no need to pull in third-party libraries, which is a significant advantage! While these are just the basics, I’ve highlighted the key points below for future reference. Callback A callback is a basic method for handling asynchronous processes. You can branch the process using onResponse/onFailure.

exampleCallback1()!!.enqueue(object : Callback<Any> {
    override fun onFailure(call: Call<Any>, t: Throwable) {
        println("exampleCallback1 : Error - onFailure")
    }

    override fun onResponse(call: Call<Any>, response: Response<Any>) {
        if (response.isSuccessful) {
            println("exampleCallback1 : Success")
        } else {
            println("exampleCallback1 : Error - isSuccessful is false")
        }
    }
})

RxJava In RxJava, you can branch the process within subscribeBy using onSuccess and onError.
exampleRxJava()
    .flatMap { result -> example2() }
    .subscribeOn(Schedulers.io())
    .observeOn(AndroidSchedulers.mainThread())
    .subscribeBy(
        onSuccess = { println("Success") },
        onError = { println("Error") }
    )
    .addTo(CompositeDisposable())

async/await With async/await, asynchronous tasks are launched concurrently, and awaitAll is used to gather and process the results together. This is a commonly used pattern in traditional asynchronous processing workflows.

viewModelScope.launch {
    try {
        val resultAsyncAwait = awaitAll(
            async { exampleAsyncAwait1() },
            async { exampleAsyncAwait2() },
            async { exampleAsyncAwait3() }
        )
        println("Success")
    } catch (exception: Exception) {
        println("Error")
    }
}

viewModelScope.launch {
    try {
        val resultAsyncAwait = exampleAsyncAwait()
            .map { result -> async { multiExampleAsyncAwait() } }
            .awaitAll()
        println("Success")
    } catch (exception: Exception) {
        println("Error")
    }
}

withTimeout withTimeout performs timeout processing; an exception is thrown when a timeout occurs.

viewModelScope.launch {
    try {
        withTimeout(1000L) {
            exampleWithTimeout()
        }
        println("Success")
    } catch (timeoutCancellationException: TimeoutCancellationException) {
        println("Error due to timeout")
    } catch (exception: Exception) {
        println("Error")
    }
}

withTimeoutOrNull withTimeoutOrNull also handles timeouts, but unlike withTimeout, it returns null instead of throwing.

viewModelScope.launch {
    try {
        val resultWithTimeoutOrNull = withTimeoutOrNull(timeout) {
            exampleWithTimeoutOrNull()
        }
        if (resultWithTimeoutOrNull != null) {
            println("Success")
        } else {
            println("Error due to timeout")
        }
    } catch (exception: Exception) {
        println("Error")
    }
}

Database operations with Room and Coroutines When combining Room and Coroutines, start by checking whether the database is empty; if it is, proceed to insert the required values. Since retrieving values from the database can throw an exception, the operation is wrapped in a try/catch block.
Currently, Room and Coroutines are frequently used with Flow to handle asynchronous operations in Android development.

viewModelScope.launch {
    val resultDatabaseRoom = databaseRoom.exac()
    if (resultDatabaseRoom.isEmpty()) {
        println("The database is empty")
    } else {
        println("There are values in the database")
    }
    try {
        val examDataList = getValue()
        for (resultExam in examDataList) {
            database.insert(resultExam)
        }
        println("Success")
    } catch (exception: Exception) {
        println("Error")
    }
}

Flow This is a basic Flow setup. In onStart, the Flow emits an initial value, and in onCompletion, a log message indicates that the process has finished.

sealed class UiState {
    data object Loading : UiState()
    data class Success(val stockList: List<Stock>) : UiState()
    data class Error(val message: String) : UiState()
}

val anythingAsLiveData: LiveData<UiState> = anythingDataSource
    .map { anyList -> UiState.Success(anyList) as UiState }
    .onStart { emit(UiState.Loading) }
    .onCompletion { Timber.tag("Flow").d("Flow has completed.") }
    .asLiveData()

SharedFlow/StateFlow SharedFlow and StateFlow are hot variants of Flow. A plain Flow does not retain emitted values, whereas SharedFlow can broadcast (and optionally replay) emitted values to multiple collectors. StateFlow is a specialized SharedFlow that always holds exactly one current value, which can be read directly; SharedFlow, by contrast, does not store a single current state but emits a sequence of values to all collectors. A Flow can be converted into a StateFlow using stateIn.
sealed class UiState {
    data object Loading : UiState()
    data class Success(val stockList: List<Stock>) : UiState()
    data class Error(val message: String) : UiState()
}

val anythingAsFlow: StateFlow<UiState> = anythingDataSource
    .map { anyList -> UiState.Success(anyList) as UiState }
    .onCompletion { Timber.tag("Flow").d("Flow has completed.") }
    .stateIn(
        scope = viewModelScope,
        initialValue = UiState.Loading,
        started = SharingStarted.WhileSubscribed(stopTimeoutMillis = 5000L)
    )

Summary Although much of the content covers the basics, the course was primarily in English, which made it take longer to go through. I believe that after gaining a better overall understanding of asynchronous processing, a second review will deepen my comprehension. However, on the second pass, it seems that studying English will take priority. Thank you for reading to the end.
Introduction Hello. I’m Shimamura from the Platform Group’s Operation Tool Management Team, where I work in platform engineering, focusing on tool development and operations. I'm Yamada, also part of the Platform Group’s Operation Tool Management Team, where I focus on developing in-house tools. At KINTO Technologies, we utilize Amazon ECS + Fargate as our application platform. For CI/CD, we use GitHub Actions. In AWS ECS’s Blue/Green deployment system, the "CODE_DEPLOY" option is primarily used for the DeploymentController, and we believe there are few real-world examples where "EXTERNAL" (third-party control) is implemented. At the CI/CD Conference 2023 hosted by CloudNative Days, we also encountered an example of migrating from ECS to Kubernetes specifically to enable Blue/Green deployments. ( Chiisaku Hajimeru Blue/Green Deployment (Blue/Green Deployment That Starts Small) .) However, we wondered if it might be possible to perform Blue/Green deployments in ECS without the limitations of CodeDeploy's conditions. We also considered that offering multiple deployment methods could benefit the departments developing applications. With that in mind, we began preparations to explore these options. Indeed, despite CODE_DEPLOY being the more common setting and limited documentation available on using EXTERNAL for this purpose, we successfully implemented a system that supports it for the application teams. We'll share this as a real-world example of implementing Blue/Green deployment using external pipeline tools with ECS (Fargate). Background Issues Relying solely on ECS rolling updates may not fully meet the requirements for future releases. It’s essential to offer a variety of deployment methods and deploy applications in a way that aligns with their specific characteristics. Solution method As a first step, we decided to introduce Blue/Green deployment on ECS. 
Canary releases may present challenges in the future, but since we successfully implemented Blue/Green deployment in this form, we anticipate being able to adapt it to support configurations like setting the influx rate and other parameters via the CLI. Design Checking with CODE_DEPLOY If you search for "ECS Blue/Green deployment," you will find a wide variety of results. However, simply leaving it at that isn't ideal, so we'd like to summarize the key points and the overall setup. This is the configuration. You configure various settings in CodeDeploy, create a new task set associated with the task definition, and adjust the influx rate according to the deployment settings. You can switch over all at once, test a portion first, or gradually increase the rollout, depending on your needs. Specifications we initially thought might be unattainable When we reviewed the environment and operation under CodeDeploy, certain aspects raised concerns for us. It may all come down to specific settings, so if you have any insights, please feel free to share. We plan to verify the operation by running a test system for a certain period, allowing for customer review and other checks. The system can be maintained for about a day, but the deployment will fail if the switchover button isn't pressed once that timeframe has elapsed. We'd like the option to terminate the old application at a chosen time after the switchover. In CodeDeploy, a time limit can be configured, but it doesn't allow for arbitrary timing. Reverting back through the console appears to be a complex process. The process becomes cumbersome because, due to the permissions setup, you need to use SwitchRole to access it from the console. Overall configuration with EXTERNAL Component (Element) Name Overview Terraform A product for coding various services, AWS among them. IaC. In-house design patterns and modules are created with Terraform. GitHub Actions The CI/CD tool included in GitHub.
At KINTO Technologies, we utilize GitHub Actions for tasks such as building and releasing applications We use a pipeline in GitHub Actions to deploy new applications and transition from the old ones. ECS (Elastic Container Service) We use ECS as the runtime environment for our applications For configuration, you can set the DeploymentController to ECS, CODE_DEPLOY, or EXTERNAL; this example specifically implements it with EXTERNAL. DeploymentController We view this as a kind of control plane for ECS (or at least, that’s how we see it internally). TaskSet A collection of tasks linked to the ECS services. You can create one via the CLI, but apparently not via the console. Using this enables you to create multiple task definition versions in parallel for a single service. ( CLI reference .) Setting this up requires an ALB, Target Group, and several other components, so there are quite a few configurations involved. ALB ListenerRule A rule for directing resources to Target Groups within the ALB. In Blue/Green deployment, modifying this link toggles the traffic flow between the old and new applications. Restrictions The DeploymentController in ECS can only be set during service creation, meaning it cannot be modified for existing services. When using EXTERNAL, the platform version isn’t fixed by the service; it’s specified when creating a TaskSet. The service startup type is fixed to EC2. However, if you specify Fargate when creating a TaskSet, the task will be started up with Fargate. Implementation Terraform At KINTO Technologies, we use Terraform as the IaC. We've also turned it into a module, and here, I'll outline the key points to be mindful of that arose during the module modifications. ListenerRule Using GitHub Actions, we modify the ListenerRule to update the TargetGroup, so we configure ignore_change to prevent unnecessary updates. ECS service NetworkConfiguration LoadBalancer ServiceRegisteries For EXTERNAL settings, these three options cannot be configured. 
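As a brief sketch of the `ignore_changes` configuration for the ListenerRule mentioned above (the resource names and values here are hypothetical, not our actual module):

```hcl
# Hypothetical resource names; the lifecycle block is the point of interest.
resource "aws_lb_listener_rule" "bg" {
  listener_arn = aws_lb_listener.main.arn
  priority     = 100

  action {
    type             = "forward"
    target_group_arn = aws_lb_target_group.blue.arn
  }

  condition {
    host_header {
      values = ["domain.com"]
    }
  }

  # GitHub Actions rewrites the forward target during a release,
  # so tell Terraform not to undo that change on the next apply.
  lifecycle {
    ignore_changes = [action]
  }
}
```

With this, Terraform continues to manage the rule's existence and conditions, while the workflow is free to rewrite its forward target at release time.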
If you generate the service configuration with dynamic blocks or similar, make sure NetworkConfiguration, LoadBalancer, and ServiceRegistries are not created. In this case, the service won't be registered in CloudMap, so if you plan to integrate it with AppMesh or similar services, you'll need to account for this. (There's no issue using AppMesh for communication between ECS services even when one of them uses this Blue/Green setup.) Since Blue/Green deployment runs old and new task sets in parallel, registering them in CloudMap and allowing communication could result in unintended or erroneous access, so we believe the current behavior is likely correct.

#### IAM policy for CI/CD roles

In addition to the ECS permissions, various others are required. A sample is as follows.

```hcl
resource "aws_iam_policy" "cicd-bg-policy" {
  name = "cicd-bg_policy"
  path = "/"
  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [
      {
        Action   = ["iam:PassRole"]
        Effect   = "Allow"
        Resource = "arn:aws:iam::{ACCOUNT}:role/{ROLE_NAME}"
      },
      {
        Action   = ["ecs:DescribeServices"]
        Effect   = "Allow"
        Resource = "arn:aws:ecs:{REGION}:{ACCOUNT}:service/{ECS_CLUSTER_NAME}/{ECS_SERVICE_NAME}"
      },
      {
        Action   = ["ecs:CreateTaskSet", "ecs:DeleteTaskSet"]
        Effect   = "Allow"
        Resource = "*"
        Condition = {
          StringLike = {
            "ecs:service" = "arn:aws:ecs:{REGION}:{ACCOUNT}:service/{ECS_CLUSTER_NAME}/{ECS_SERVICE_NAME}"
          }
        }
      },
      {
        Action   = ["ecs:RegisterTaskDefinition", "ecs:DescribeTaskDefinition"]
        Effect   = "Allow"
        Resource = "*"
      },
      {
        Action   = ["elasticloadbalancing:ModifyRule"]
        Effect   = "Allow"
        Resource = "arn:aws:elasticloadbalancing:{REGION}:{ACCOUNT}:listener-rule/app/{ALB_NAME}/*"
      },
      {
        Action = [
          "elasticloadbalancing:DescribeLoadBalancers",
          "elasticloadbalancing:DescribeListeners",
          "elasticloadbalancing:DescribeRules",
          "elasticloadbalancing:DescribeTargetGroups"
        ]
        Effect   = "Allow"
        Resource = "*"
      },
      {
        Action   = ["ec2:DescribeSubnets", "ec2:DescribeSecurityGroups"]
        Effect   = "Allow"
        Resource = "*"
      },
    ]
  })
}
```

Please replace the ECS cluster name, ECS service name, and ALB name with the appropriate values, and make sure they align with the scope of your CI/CD roles and any other applicable permissions.

- Permissions for CreateTaskSet and DeleteTaskSet are not restricted to specific resources; instead, the target service is pinned by a condition.
- The DescribeLoadBalancers permissions, along with the ec2 DescribeSubnets and DescribeSecurityGroups permissions, are used by the workflows to look up status information.
- elasticloadbalancing:ModifyRule is, needless to say, necessary for rewriting the ListenerRule at release time.
- The ListenerRule permission is scoped to the ALB name because listener-rule ARNs contain random values.

### GitHub Actions

At KINTO Technologies, we use GitHub Actions as our CI/CD tool. The Platform Group develops standardized CI/CD workflows and supplies them to the app development teams.

#### Workflow overview

For this project, we built the Blue/Green deployment system from the workflows below. This article covers only the deployment workflow.

#### Key considerations and points of caution

As the provider of these workflows to the app development teams, we paid close attention to the following:

- An implementation that minimizes parameter specification at runtime, to reduce the risk of errors or misoperation. Since these workflows are run manually, every parameter that can be retrieved via the CLI is gathered within the workflows themselves, so incorrect parameters can't be specified at runtime.
- Simplified workflow setup, using secrets as little as possible. The AWS resource names are set through environment variables, with fixed values for everything except system-specific ones, which minimizes the configuration required.
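To make the setup concrete, the environment-variable approach might look like the following (all names and values here are hypothetical examples, not our actual configuration):

```yaml
# Hypothetical example: only the system-specific values change per application.
env:
  AWS_REGION: ap-northeast-1
  CLUSTER_NAME: sample-cluster      # system-specific
  SERVICE_NAME: sample-service      # system-specific
  ALB_NAME: sample-alb              # system-specific
  TASK_DEFINITION: sample-taskdef   # system-specific
  SUBNET_PREFIX: private-subnet     # fixed by convention
  SECURITY_GROUP_1: app-sg          # fixed by convention
  SECURITY_GROUP_2: common-sg       # fixed by convention
  PORT: "8080"
```

Only the system-specific entries would differ between applications; the rest follow naming conventions and can ship with the workflow as fixed values.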
Registering all of the AWS resource ARNs as secrets would make in-workflow processing to resolve ARNs from resource names unnecessary, reducing the amount of code. However, to minimize the initial configuration workload, we instead implemented a CLI-driven process that resolves ARNs from resource names, requiring almost no manual configuration.

#### Workflow implementation

Here, we'd like to explain the main processing of each workflow using sample code. All the workflows basically follow the same shape: get the AWS credentials → get the required parameters via the CLI → run validation checks → execute.

##### Creating the task set

The runtime parameters for this workflow are the image tag in ECR (Elastic Container Registry) and the environment. Before creating the task set, it runs validation checks to ensure that the target group is available for testing and that the specified image tag exists in ECR. It then creates the task definition from the image tag. Once the task definition has been created, it gathers the parameters needed for the task set (the subnets, security groups, and task definition) and runs the CLI to create it.

```yaml
jobs:
  ...
  ## Check the target group to be used
  check-available-targetGroup:
    ...
  ## Create the task definition from the ECR image
  deploy-task-definition:
    ...
  ## Create the task set
  create-taskset:
    runs-on: ubuntu-latest
    needs: deploy-task-definition
    steps:
      # Get the AWS credentials
      - name: Set AWS Credentials
        ...
      - name: Get the target group
        ...
      # Create the task set
      - name: Create TaskSet
        run: |
          # Get the task definition ARN
          taskDefinition=`aws ecs describe-task-definition \
            --task-definition ${{ env.TASK_DEFINITION }} \
            | jq -r '.taskDefinition.taskDefinitionArn'`
          echo $taskDefinition

          # Get the subnets
          subnetList=(`aws ec2 describe-subnets | jq -r '.Subnets[] | select(.Tags[]?.Value | startswith("${{ env.SUBNET_PREFIX }}")) | .SubnetId'`)
          if [ "$subnetList" == "" ]; then
            echo "!! Unable to get the subnets, so processing will be aborted."
            exit 1
          fi

          # Get the security groups
          securityGroupArn1=`aws ec2 describe-security-groups | jq -r '.SecurityGroups[] | select(.Tags[]?.Value == "${{ env.SECURITY_GROUP_1 }}") | .GroupId'`
          if [ "$securityGroupArn1" == "" ]; then
            echo "!! Unable to get the security groups, so processing will be aborted."
            exit 1
          fi
          securityGroupArn2=`aws ec2 describe-security-groups | jq -r '.SecurityGroups[] | select(.Tags[]?.Value == "${{ env.SECURITY_GROUP_2 }}") | .GroupId'`
          if [ "$securityGroupArn2" == "" ]; then
            echo "!! Unable to get the security groups, so processing will be aborted."
            exit 1
          fi

          echo ---------------------------------------------
          echo Creating the task set
          aws ecs create-task-set \
            --cluster ${{ env.CLUSTER_NAME }} \
            --service ${{ env.SERVICE_NAME }} \
            --task-definition ${taskDefinition} \
            --launch-type FARGATE \
            --network-configuration "awsvpcConfiguration={subnets=["${subnetList[0]}","${subnetList[1]}"],securityGroups=["${securityGroupArn1}","${securityGroupArn2}"]}" \
            --scale value=100,unit=PERCENT \
            --load-balancers targetGroupArn="${createTaskTarget}",containerName=application,containerPort=${{ env.PORT }}
```

##### Switching listener rules

The workflow for switching listener rules begins by retrieving and verifying the number of task sets currently running. If only the production task set is running (a single task set) and you switch the listener rules for the production and test environments, the task set serving production traffic would be removed. To prevent this, the implementation checks the number of running task sets and, if there is one or fewer, halts without switching the listener rules. After that check, it swaps the production and test listener rules. Since there is no single CLI command that swaps two listener rules, we call it "switching," but strictly speaking it runs a CLI command that changes each listener rule (modify-rule).
Since the two listener-rule change jobs run in parallel, we use a sleep command to offset their timing. This ensures that both listener rules don't end up pointing at the test environment due to minor timing differences.

```yaml
env:
  RULE_PATTERN: host-header   ## http-header / host-header / path-pattern / source-IP, etc.
  PROD_PARAM: domain.com
  TEST_PARAM: test.domain.com
  ...
jobs:
  ## If there is one task set or fewer running, prevent the host header from being changed
  check-taskSet-counts:
    runs-on: ubuntu-latest
    steps:
      ## Get the AWS credentials
      - name: Set AWS Credentials
        ...
      # Validation
      - name: Check TaskSet Counts
        run: |
          taskSetCounts=(`aws ecs describe-services --cluster ${{ env.CLUSTER_NAME }} \
            --service ${{ env.SERVICE_NAME }} \
            --region ${{ env.AWS_REGION }} \
            | jq -r '.services[].taskSets | length'`)
          if [ "$taskSetCounts" == "" ]; then
            echo "!! Unable to get the number of running task sets, so processing will be aborted."
            exit 1
          fi
          echo Number of running task sets: $taskSetCounts
          if [ $taskSetCounts -le 1 ]; then
            echo "!! The number of running task sets is 1 or less, so processing will be aborted."
            exit 1
          fi
  ## Switch between the ALB listener rules (production, test)
  change-listener-rule-1:
    runs-on: ubuntu-latest
    needs: check-taskSet-counts
    steps:
      ## Get the AWS credentials
      - name: Set AWS Credentials
        ...
      - name: Change Listener Rules
        run: |
          # Get the ALB ARN from the ALB name
          albArn=`aws elbv2 describe-load-balancers --names ${{ env.ALB_NAME }} | jq -r .LoadBalancers[].LoadBalancerArn`
          # Get the listener ARN from the ALB ARN
          listenerArn=`aws elbv2 describe-listeners --load-balancer-arn ${albArn} | jq -r .Listeners[].ListenerArn`
          # Get the listener rule ARNs from the listener ARN
          listenerRuleArnList=(`aws elbv2 describe-rules --listener-arn ${listenerArn} | jq -r '.Rules[] | select(.Priority != "default") | .RuleArn'`)
          pattern=`aws elbv2 describe-rules --listener-arn ${listenerArn} \
            | jq -r --arg listener_rule ${listenerRuleArnList[0]} '.Rules[] | select(.RuleArn == $listener_rule) | .Conditions[].Values[]'`
          if [ "$pattern" == "" ]; then
            echo "!! Unable to get the listener rule, so processing will be aborted."
            exit 1
          fi
          echo ---------------------------------------------
          echo Current rule pattern: $pattern
          echo ---------------------------------------------
          if [ "$pattern" == "${{ env.TEST_PARAM }}" ]; then
            aws elbv2 modify-rule --rule-arn ${listenerRuleArnList[0]} --conditions Field="${{ env.RULE_PATTERN }}",Values="${{ env.PROD_PARAM }}"
          else
            sleep 5s
            aws elbv2 modify-rule --rule-arn ${listenerRuleArnList[0]} --conditions Field="${{ env.RULE_PATTERN }}",Values="${{ env.TEST_PARAM }}"
          fi
          echo ---------------------------------------------
          echo Rule pattern after change
          aws elbv2 describe-rules --listener-arn ${listenerArn} \
            | jq -r --arg listener_rule ${listenerRuleArnList[0]} '.Rules[] | select(.RuleArn == $listener_rule) | .Conditions[].Values[]'
  ## Switch between the ALB listener rules (production, test)
  change-listener-rule-2:
    ... The processing is the same as for change-listener-rule-1; only the listenerRuleArnList element used differs ...
```

##### Deleting the task set

In the task set deletion workflow, the only runtime parameter is the environment.
If you made the task set ID to delete a runtime parameter, the workflow would only need a single CLI command to delete it, simplifying the process to a single line aside from obtaining AWS credentials and other setup steps. However, if you accidentally specified the task set ID currently serving production, the production task set could be deleted, leaving only the test environment active. We therefore limited the runtime parameter to the environment only; the workflow looks up the test environment's task set itself and deletes that.

```yaml
env:
  TEST_PARAM: test.domain.com   # Host header for testing
  ...
jobs:
  ## Delete the task set
  delete-taskset:
    runs-on: ubuntu-latest
    steps:
      ## Get the AWS credentials
      - name: Set AWS Credentials
        ...
      # Get the target group linked to the test host header
      - name: Get TargetGroup
        run: |
          # Get the ALB ARN from the ALB name
          albArn=`aws elbv2 describe-load-balancers --names ${{ env.ALB_NAME }} | jq -r .LoadBalancers[].LoadBalancerArn`
          # Get the listener ARN from the ALB ARN
          listenerArn=`aws elbv2 describe-listeners --load-balancer-arn ${albArn} | jq -r .Listeners[].ListenerArn`
          # Get the target group linked to the test rule from the listener ARN and the test host header
          testTargetGroup=`aws elbv2 describe-rules --listener-arn ${listenerArn} \
            | jq -r '.Rules[] | select(.Conditions[].Values[] == "${{ env.TEST_PARAM }}") | .Actions[].TargetGroupArn'`
          echo "testTargetGroup=${testTargetGroup}" >> $GITHUB_ENV
      # Get the ID of the task set linked to the test host header's target group
      - name: Get TaskSetId
        run: |
          taskId=`aws ecs describe-services \
            --cluster ${{ env.CLUSTER_NAME }} \
            --service ${{ env.SERVICE_NAME }} \
            --region ${{ env.AWS_REGION }} \
            | jq -r '.services[].taskSets[] | select(.loadBalancers[].targetGroupArn == "${{ env.testTargetGroup }}") | .id'`
          if [ "$taskId" == "" ]; then
            echo "!! Unable to find the task set linked to the test host header's target group, so processing will be aborted."
            exit 1
          fi
          echo The task set ID to be deleted
          echo $taskId
          echo "taskId=${taskId}" >> $GITHUB_ENV
      # Delete the task set using the ID obtained
      - name: Delete TaskSet
        run: |
          aws ecs delete-task-set --cluster ${{ env.CLUSTER_NAME }} --service ${{ env.SERVICE_NAME }} --task-set ${{ env.taskId }}
```

## Next steps

We plan to refine the ALB ListenerRule component and explore enabling canary releases, but first we need user feedback, so for now we are rolling the system out to the application teams to gather insights and improvements. In our GitHub Actions workflows, we minimized the use of secrets as much as possible; however, they still require setting numerous environment variables, and we aim to reduce this dependency, for instance by configuring things so that only system-specific values need to be set. We are also looking into whether we can switch between listener rules safely and instantaneously.

## Impressions

As mentioned earlier, there are likely very few real-world examples of Blue/Green deployment with ECS + EXTERNAL (using GitHub Actions). We reached this point by building the system from scratch, with no existing documentation to guide us. In hindsight, while implementing the GitHub Actions workflows wasn't inherently difficult, we came up with several effective ideas for making them both straightforward (minimal setup) and safe to use. Looking ahead, we aim to enhance the system by having people use it and refining it based on their feedback.

## Summary

The Operation Tool Management Team oversees and develops tools used internally throughout the organization. We leverage tools and solutions created by other teams within the Platform Group.
Based on the company's requirements, we either develop new tools from scratch or migrate existing components as needed. If you're interested in these activities or would like to learn more, please don't hesitate to reach out to us.
# A Kotlin Engineer's Journey Building a Web Application with Flutter in Just One Month

Hello. I am Ohsugi from the Woven Payment Solution Development Group. Our team mainly does server-side programming with Kotlin/Ktor and is currently working at Woven by Toyota Inc. on the development of the payment system used in Toyota Woven City. Working in cooperation with our in-house business teams and partners to build Woven City, we have repeatedly conducted proofs of concept (PoC) to expand the features of the payment system. Recently, we ran our first PoC for the payment system, simulating its operation in actual retail stores. In this article, I would like to explain how we came to adopt Flutter for developing the client apps used in that PoC.

## Introduction

To conduct the PoC for retail sales operations, we developed the following features to support store functions in addition to the payment system:

- Product management for store inventory
- Point-of-sale (POS) cash registers, including:
  - Product scanning
  - Shopping cart functionality
- Sales reports and payment management
- Inventory tracking

In particular, to regularly update tens of thousands of product records, report sales, and conduct month-end inventory checks, we needed more than just a payment API: we needed a GUI application accessible to non-technical store staff. This is what prompted us, a team that normally focuses on server-side development, to take on the challenge of building a client application.

## Selecting a Language and Framework

For the client application, we narrowed our choices down to cross-platform frameworks that would allow us to build not only for the web but also for iOS/Android.

| Language / framework | Reasons for selection |
| --- | --- |
| Dart / Flutter (Flutter on the web) | - A trending technology that has been getting significant attention recently.<br>- Also adopted by our in-house Mobile App Development Team, so members across teams are very familiar with the language and framework. |
| TypeScript / Expo (React Native) (Expo for web) | - For web development, this choice would let us build on React, one of the most mature technologies out there.<br>- Our team members have experience with React, so ramp-up time would be minimal. |
| Kotlin / Compose Multiplatform (Compose for web) | - With few existing adoption examples, it offers an opportunity to explore a more innovative development approach.<br>- No team members have direct development experience with it, but it should be straightforward for anyone familiar with Kotlin. |

## Technical validation

To choose a language and framework, we carried out a technical evaluation by building a web app that combines state management and screen transitions, two elements central to client app development. The app is very simple: pressing the + or - button increases or decreases a count on the first screen (Home Page), and pressing the "next" button navigates to a second screen (Detail Page), where the count is displayed. For each language/framework combination, we looked at the development experience in terms of how UI components are implemented, performance, libraries, documentation, and community support.

### How UI components are implemented

First, we compared Flutter on the web, Expo for web, and Compose for web, using the Detail Page code as an example.

#### Dart / Flutter on the web

- Very intuitive, as you implement the UI with object-oriented components rather than the DOM.
- You can use virtually the same code for both mobile and web apps.
- Material Design is applied by default for styling, which has its pros and cons, but is a real boon when engineers need to handle design too.
- When rendering with CanvasKit, it's possible to achieve a nearly identical UI appearance across platforms.

```dart
class DetailPage extends StatelessWidget {
  const DetailPage({super.key});

  @override
  Widget build(BuildContext context) {
    final args = ModalRoute.of(context)!.settings.arguments as DetailPageArguments;
    return Scaffold(
      appBar: AppBar(
        title: const Text("Flutter Demo at Detail Page"),
      ),
      body: Center(
        child: ConstrainedBox(
          constraints: const BoxConstraints(minWidth: 120),
          child: Center(
            child: Text(
              args.value.toString(),
              style: const TextStyle(fontSize: 72),
            ),
          ),
        ),
      ),
    );
  }
}
```

#### TypeScript / Expo

- Like Flutter, the UI is implemented with component objects instead of the DOM, which feels very intuitive. The downside is that the framework provides only minimal components, so you have to implement additional ones yourself.
- The same code can be used for both mobile and web with minimal differences.
- Styling is done with StyleSheet, a syntax similar to CSS; since styles are scoped to the app, it doesn't feel as unwieldy as raw CSS can.
- The sample app uses react-navigation to implement screen transitions.
```tsx
const DetailPage: React.FC = () => {
  // from react-navigation
  const route = useRoute<RouteProp<RootStackParamList, 'Detail'>>();
  return (
    <View>
      <Header title={'Expo Demo at Detail Page'} />
      <CenterLayout>
        <Counter value={route.params.value}/>
      </CenterLayout>
    </View>
  );
}

const Header: React.FC<{title: String}> = (props) => {
  const {title} = props;
  return (
    <View style={styles.header}>
      <Text style={styles.title}>
        {title}
      </Text>
    </View>
  )
}

const CenterLayout: React.FC<{children: React.ReactNode}> = (props) => {
  const {children} = props;
  return (
    <View style={styles.layout}>
      {children}
    </View>
  )
}

const Counter: React.FC<{value: number}> = (props) => {
  const {value} = props;
  return (
    <View style={styles.counterLayout}>
      <Text style={styles.counterLabel}>{value}</Text>
    </View>
  )
}

const styles = StyleSheet.create({
  header: {
    position: "absolute",
    top: 0,
    left: 0,
    width: '100%',
    backgroundColor: '#20232A',
    padding: '24px 0',
  },
  title: {
    color: '#61dafb',
    textAlign: 'center',
  },
  layout: {
    display: 'flex',
    flexDirection: 'row',
    justifyContent: 'center',
    alignItems: 'center',
    height: '100vh',
  },
  counterLayout: {
    minWidth: 120,
    textAlign: 'center'
  },
  counterLabel: {
    fontSize: 72,
  }
});
```

#### Kotlin / Compose for web

- Instead of the Compose UI used on mobile and desktop, you implement the UI with web-specific components that wrap the HTML DOM.
- Code cannot be reused across mobile and web.
- Styling needs to be implemented in CSS. For each component, you can either define its properties from scratch or use pre-defined styles as StyleSheet objects.
- To implement screen transitions, the sample app uses the routing-compose library for Compose Multiplatform, which supports both web and desktop.

```kotlin
@Composable
fun DetailPage(router: Router, params: Map<String, List<String>>?) {
    Div {
        components.Header(title = "Compose for web Demo at Detail Page")
        CenterLayout {
            params?.get("value")?.get(0)?.let { Counter(it.toInt()) }
        }
    }
}

@Composable
fun Header(title: String) {
    H1(attrs = {
        style {
            position(Position.Fixed)
            top(0.px)
            left(0.px)
            paddingTop(24.px)
            paddingBottom(24.px)
            backgroundColor(Color("#7F52FF"))
            color(Color("#E8F0FE"))
            textAlign("center")
            width(100.percent)
        }
    }) {
        Text(title)
    }
}

@Composable
fun CenterLayout(content: @Composable () -> Unit) {
    Div(attrs = {
        style {
            display(DisplayStyle.Flex)
            flexDirection(FlexDirection.Row)
            justifyContent(JustifyContent.Center)
            alignItems(AlignItems.Center)
            height(100.vh)
        }
    }) {
        content()
    }
}

@Composable
fun Counter(value: Int) {
    Span(attrs = {
        style {
            minWidth(120.px)
            textAlign("center")
            fontSize(24.px)
        }
    }) {
        Text(value.toString())
    }
}
```

### Performance

Next, we compared the build times and bundle sizes of the sample app for each language and framework, building with the default optimization options. The testing environment was a MacBook Pro (2021) with an M1 Pro CPU and 32 GB of memory.

| Language / Framework | Build conditions | Build time | Bundle size |
| --- | --- | --- | --- |
| Dart / Flutter on the web | - Flutter v3.7.7<br>- Dart v2.19.2 | 14 s | 1.7 MB (CanvasKit)<br>1.3 MB (Html) |
| TypeScript / Expo for web | - TypeScript v4.9.4<br>- Expo v48.0.11 | 10 s | 500 KB |
| Kotlin / Compose for web | - Kotlin v1.8.10 | 9 s | 350 KB |

As you can see, for the same functionality the bundle with Flutter is several times larger than with React (1.7 MB vs. 500 KB), which means the initial rendering will likely take noticeably longer. You can inspect the JS code generated by Flutter by adding --dump-info to the build options; this showed us that the bundle mainly consists of the Dart and Flutter framework itself.

### Libraries, documentation, and community support

Lastly, here is a summary of the libraries, documentation, and community support for each language/framework combination.
| Language / Framework | Libraries | Documentation / Community support |
| --- | --- | --- |
| Dart / Flutter on the web | With Flutter packages, you can search for libraries available for Flutter. Libraries marked with the Flutter Favorite logo are officially recognized for their popularity and ease of use. | The official documentation and videos are comprehensive, and the website also provides recommended libraries and design guidelines for state management and more. |
| TypeScript / Expo for web | The basic libraries are fairly extensive, and the de facto standard ones are easy to find. However, maintenance of each library relies largely on the community, so you need to choose carefully. | For basic implementations, there is a rich selection of official React and Expo documentation. For design guidelines, including library design, the React discussions available on the web are a good reference. |
| Kotlin / Compose for web | You can use a wide variety of JVM libraries. However, Android and Compose UI-related libraries are often not available in Compose for web. | There is not much documentation, so you need to search the GitHub repositories or the community's Slack channel for information. |

## The Adoption of Flutter

Based on the technical evaluation described above, we chose Flutter as the technology stack for client app development in the PoC. The reasons are as follows. Even team members unfamiliar with client app development can get productive with Flutter easily, as it has comprehensive documentation and reference materials, which should minimize the impact on our primary server-side development work. The framework is actively developed and well-maintained, so it is easy to upgrade versions and introduce libraries.
Given the characteristics of the PoC, the app will run in a stable network environment, so the performance limitations are not a significant concern.

Additionally, though it may sound like an afterthought, being able to call JavaScript from Dart was very reassuring for issues that couldn't be solved with Flutter alone. Our system uses Keycloak as the authentication platform, and since Keycloak's official repository does not currently provide a Flutter client library, we handle authentication by driving the JS library from Dart.

## Conclusion

In this article, I introduced the reasons behind our decision to adopt Flutter for the client app used in the PoC. We are now developing the client app in parallel with our server-side work, and I would like to update this blog as we deepen our technical knowledge.
## Introduction

I am Kinoshita, a prototyping engineer at KINTO Technologies. To kick off an upcoming series on Agile here on the blog, I'll start by sharing a quick update on renewing my Scrum Inc. Registered Scrum Master qualification. If you are interested in how to become a Registered Scrum Master and what the seminar covers, please read the previous article I wrote on the topic. When I wrote that article, the certification was called Licensed Scrum Master (LSM), but on July 29, 2022, it was renamed Registered Scrum Master (RSM). The license name seems to have been updated automatically, as my certification had also changed to RSM. I've added a note about the name change to the previous article as well.

## The Renewal

A year after obtaining the license, you receive an email notifying you that its expiration date is approaching and that it will become invalid unless renewed. I didn't notice the email myself, so I was unaware the renewal deadline was coming up until a colleague who had attended the previous seminar with me mentioned it. You have 60 days to decide whether to renew; if you choose to proceed, you go to the members' site, pay the renewal fee, and unlock the renewal exam. I had initially thought the renewal would cost $50 per year, but it turns out there are also five-year and lifetime plans. This time there was a discount, reducing the five-year plan from $250 to $199 and the lifetime plan from $500 to $399. At KINTO Technologies, subsidies cover seminars but not certifications, so I would be covering the renewal cost myself. Even with the discount, with the yen as weak as 138 to the dollar (at the time of payment), the costs came to approximately 27,500 yen for the $199 plan and around 55,000 yen for the $399 one. The prices didn't seem so daunting in dollars, but once I converted them to yen, I felt a sharp pain in both my wallet and my heart.
## Why I Renewed

Given that I had had few opportunities to apply what I'd learned about Scrum, and would have to pay out of my own pocket, I honestly wasn't eager to renew. Beyond the difficulty of getting stakeholders to understand Agile and overcome their resistance to it, our teams and groups are large, and not everyone wants to do Agile and Scrum; it felt like a tall order from the start. As a result, I strongly felt that the important thing would be to involve the people around me, find like-minded individuals, and create a more supportive atmosphere, even one small step at a time. In the end, fostering a "Let's do it!" mindset would matter far more than whether or not I held a license.

What changed my attitude, and also inspired me to write this article, was the expansion of my network within the company, which gave me numerous opportunities to discuss Scrum Master topics with people I had never met before. One of these was a roundtable discussion with other teams, held thanks to avid Agile enthusiast Kin-chan really hitting the ground running after joining the company. Listening to them made me regret having kept my concerns pent up inside, and nudged me back toward wanting to figure out how to solve them. It dawned on me that holding the license would keep expanding my circle, which might in turn increase my opportunities to put it to use. These thoughts made me more inclined to renew.

A major factor was that, almost eerily at the last minute, a colleague reached out after reading my previous article. They knew how much the renewal would cost but strongly encouraged me to go ahead anyway. Prompted by this encouragement, and figuring that if I was going to do it I might as well go all in, I opted for the lifetime plan.
Still torn between wanting and not wanting to renew, I sat the exam with the pain in my wallet filling my head, but despite that I managed to pass without a hitch. To ease the sting on my heart and wallet, I took one of the souvenir candies someone had left out in the office and savored every bite of it as a 55,000-yen indulgence. The colleague who had attended the seminar with me last time had also renewed, but opted for the annual plan. They mentioned that when they took the exam there was quite a bit they had forgotten, so they were glad they had chosen the yearly renewal, as it gave them a chance to review everything again.

About the Renewal Exam

Just like the exam after the seminar last time, you get not just one but two chances to take the renewal exam. After answering all the questions, you are shown your score along with which answers were right and wrong, so you can see where you made mistakes. The content and difficulty felt on the same level as the exam after last year's seminar. If you pass, you receive a confirmation email, and the expiry date displayed beneath the official mark in the bottom right of the certificate changes to "Valid Until Lifetime." So I am now a Registered Scrum Master for life and will never need to take the renewal exam again, nor worry about whether to renew every time it expires.

Conclusion, and a Plug

A year had passed since I took the seminar, and it was time to decide whether to renew my license. Personally, I didn't feel the license had proven its value over the past year. However, I started to see its worth for a reason I hadn't originally considered: it opened up opportunities to connect with people. With that in mind, I decided to renew. At KINTO Technologies, many other teams, projects, and products are embracing the challenge of Agile and Scrum, in addition to those involved in the roundtable discussion.
Our avid Agile enthusiast Kin-chan will cover these topics in the upcoming series on Agile I mentioned earlier. Kin-chan has a broad view of Agile, unbound by any specific development framework, and has already taken passionate steps to champion it within the company multiple times. I'm personally looking forward to the wide range of perspectives the series will offer, so do stay tuned.
Introduction

On the Platform Engineering team at KINTO Technologies, we were not fully satisfied with our current logging solution. Then we spotted a chance to make our log platform easier to use, and to cut costs, by taking advantage of a new AWS service. Two birds with one stone!

Of course, we cannot simply tear down the existing system and swap in a new service. That would be like replacing a car's engine while it is still driving! We needed to investigate how to use and configure the new service to fit our needs. While evaluating OpenSearch Serverless as our new log platform, we needed a solution for alerting: we currently use the Alerting feature of our OpenSearch cluster, but that feature is not available on serverless instances.

Fortunately, as of AWS Managed Grafana version 9.4, the Grafana OpenSearch plugin can use OpenSearch Serverless instances as a data source (see the Grafana OpenSearch plugin page), which means we can use Grafana for alerting! We still had to figure out how to configure both services so they would work together.

By the investigation stage, we had already created an OpenSearch Serverless instance and tested log ingestion from all the sources we wanted to use. The remaining task was to set up a test Grafana instance in a sandbox and configure the serverless instance as a data source.

At the time of writing, the AWS documentation had no detailed explanation of this procedure. As engineers, we don't get a step-by-step guide for everything, so some trial and error was needed to find out what works. To narrow down the required permissions, we also asked AWS Support for help, and they escalated our request to both the internal Amazon Managed Grafana team and the OpenSearch team, since the documentation was not yet in place. That is why I decided to write this article and share what we learned.

Before going further, a quick introduction: I'm Martin, a platform engineer at KINTO Technologies. I joined the team last year and have been working on AWS projects on and off ever since. I learned a lot while working on this project, and I'm very happy to share the experience with you!
The biggest lesson I took away from this project is that AWS Support is a fantastic resource, and you should not hesitate to ask for help when you get stuck.

Setting Up the Environment

In this article, we will set everything up through the AWS console. Of course, you can create the same configuration with your favorite Infrastructure as Code tool. The article assumes that you are familiar with the AWS console and that you already have an OpenSearch Serverless instance up and running.

Note that the configuration presented here favors simplicity, so I strongly recommend reviewing it and adjusting it to your security requirements.

Setting Up the IAM Role

First, we need to create an IAM role for the Grafana instance to use. If you plan to use other AWS services from your Grafana workspace, you may prefer to select the "Service managed" option when creating the workspace; you can then either update the role AWS creates or specify the ARN of a custom role when configuring the Grafana data source.

The trust policy required when creating the IAM role is as follows:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Service": "grafana.amazonaws.com"
      },
      "Action": "sts:AssumeRole"
    }
  ]
}
```

You can get the same trust policy by selecting Amazon Grafana as the "Trusted entity type" for the AWS service (you select it in the use case section).

The permission policy Grafana needs in order to access OpenSearch Serverless is shown below. My thanks go to the AWS Support team, who escalated our request to the Grafana and OpenSearch teams so that we could get the minimum required permissions.

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "es:ESHttpGet",
        "es:DescribeElasticsearchDomains",
        "es:ListDomainNames"
      ],
      "Resource": "*"
    },
    {
      "Effect": "Allow",
      "Action": "es:ESHttpPost",
      "Resource": [
        "arn:aws:es:*:*:domain/*/_msearch*",
        "arn:aws:es:*:*:domain/*/_opendistro/_ppl"
      ]
    },
    {
      "Effect": "Allow",
      "Action": [
        "aoss:ListCollections",
        "aoss:BatchGetCollection",
        "aoss:APIAccessAll"
      ],
      "Resource": [
        "arn:aws:aoss:<YOUR_REGION>:<YOUR_ACCOUNT>:collection/*"
      ]
    }
  ]
}
```

OpenSearch Access Policy

On the OpenSearch side, we need to add a data access policy for the newly created IAM role. Even if the IAM role has the permissions required to access OpenSearch, a data access policy must be created before the role can access the data inside the collection. See the AWS documentation for details.

In the OpenSearch service page menu, select "Data access policies" in the Serverless section and click the "Create access policy" button. Give the access policy a name and a description, and choose JSON as the policy definition method. The policy below is taken from the Grafana OpenSearch plugin documentation (note that it is written in Terraform-style syntax, so adapt it if you paste it into the console's JSON editor):

```
[
  {
    Rules = [
      {
        ResourceType = "index",
        Resource = [
          "index/<NAME_OF_YOUR_OPENSEARCH_INSTANCE>/*"
        ],
        Permission = [
          "aoss:DescribeIndex",
          "aoss:ReadDocument"
        ]
      },
      {
        ResourceType = "collection",
        Resource = [
          "collection/<NAME_OF_YOUR_OPENSEARCH_INSTANCE>"
        ],
        Permission = [
          "aoss:DescribeCollectionItems"
        ]
      }
    ],
    Principal = [
      <GRAFANA_IAM_ARN>
    ],
    Description = "Read permissions for Grafana"
  }
]
```

Update the name of your OpenSearch Serverless deployment and the ARN of the IAM role created earlier.

A Little Networking Setup

Before continuing with the Grafana instance, let's create a few network resources.

First, create two subnets in the same VPC as the OpenSearch Serverless deployment. Each subnet must be in a different Availability Zone.

Once the subnets are created, update each subnet's route table and add a new route from 0.0.0.0/0 to the internet gateway.

Next, create a security group that allows inbound HTTPS traffic from the VPC and all outbound traffic to 0.0.0.0/0.

With that done, we are ready to create the Grafana instance!

Creating the Grafana Instance

Search for the Amazon Managed Grafana service in the console search bar. On the service's home page, use the handy button the AWS engineers have placed there to create a Grafana workspace.

In the first step of the creation page, set a name and description for the Grafana workspace. Make sure to choose version 9.4 or later; the latest version is 10.4, so that is what I will use.

On the next page, choose your preferred authentication method. I will go with AWS IAM Identity Center.

For Permission type, select "Customer managed" and specify the ARN of the IAM role created earlier. After creating the Grafana workspace, I ran into a strange issue where an IAM role other than the one I had selected was being used, and I had to update the workspace to use the correct role. This may have been a bug, or it may have been a mistake in my configuration, but for the purposes of this article let's say I definitely picked the right role and it was a bug. OK? Great! Let's move on!

In the Outbound VPC connection section, select the same VPC where the OpenSearch Serverless instance is deployed. Under Mapping and Security Groups, select the subnets and the security group created earlier.

In the Workspace configuration options section, be sure to select "Turn plugin management on".

For this tutorial, select "Open Access" in the Network access control section.

Click the Next button and review your settings.

Once the workspace is created, set up your authentication method. Since I chose AWS IAM Identity Center, I add my own user and make it an admin.

You should now be able to connect!
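If you would rather script this setup than click through the console, the policy documents above can be assembled programmatically and handed to boto3 or your IaC tool of choice. The sketch below is a minimal example that only builds the JSON documents (it makes no AWS calls); the function names and the region/account placeholders are mine, not from AWS.

```python
import json


def build_trust_policy() -> dict:
    """Trust policy letting Amazon Managed Grafana assume the role."""
    return {
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Principal": {"Service": "grafana.amazonaws.com"},
            "Action": "sts:AssumeRole",
        }],
    }


def build_permission_policy(region: str, account: str) -> dict:
    """Permissions Grafana needs to query OpenSearch Serverless collections."""
    return {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Effect": "Allow",
                "Action": [
                    "es:ESHttpGet",
                    "es:DescribeElasticsearchDomains",
                    "es:ListDomainNames",
                ],
                "Resource": "*",
            },
            {
                "Effect": "Allow",
                "Action": "es:ESHttpPost",
                "Resource": [
                    "arn:aws:es:*:*:domain/*/_msearch*",
                    "arn:aws:es:*:*:domain/*/_opendistro/_ppl",
                ],
            },
            {
                "Effect": "Allow",
                "Action": [
                    "aoss:ListCollections",
                    "aoss:BatchGetCollection",
                    "aoss:APIAccessAll",
                ],
                # Scope this down to specific collections if you can.
                "Resource": [f"arn:aws:aoss:{region}:{account}:collection/*"],
            },
        ],
    }


if __name__ == "__main__":
    print(json.dumps(build_trust_policy(), indent=2))
    print(json.dumps(build_permission_policy("ap-northeast-1", "123456789012"), indent=2))
```

With boto3 you could then pass these to `iam.create_role(AssumeRolePolicyDocument=json.dumps(...))` and `iam.put_role_policy(...)`, but treat that as a starting point and adapt it to your own tooling.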
Connecting Grafana to OpenSearch Serverless

Before adding the OpenSearch Serverless data source, we need to install the OpenSearch plugin in the Grafana workspace. To do so, follow these steps:

From the left-hand menu, select Administration, then Plugins and Data, then Plugins. On the plugins page, select "All" instead of "Installed" in the field at the top of the page. Search for the OpenSearch plugin and install it. Once installation is complete, an "Add new data source" button appears at the top right of the OpenSearch plugin page. Click it.

Next, configure the data source settings to connect to your OpenSearch Serverless instance:

- HTTP section: enter the URL of your OpenSearch Serverless instance in the URL field.
- Auth section: turn on SigV4 auth and select the region where your OpenSearch Serverless instance is located.
- OpenSearch Details section: turn on Serverless and set the index you want to use.
- Logs section: set the names of the message and level fields.

Finally, click "Save & test". You should see a message confirming that the connection succeeded. You can now use this data source to create alerts and dashboards!

Conclusion

I hope this article has been useful and helps you set up your own Grafana instance with OpenSearch Serverless as a data source. For us at KINTO Technologies, using Grafana for alerting looks like a great option for our new logging solution. This setup gives us a robust, efficient, and cost-effective logging and alerting solution that matches our requirements. Personally, I found writing alert queries in Grafana simpler and more flexible than in OpenSearch.

By the way, the Platform Group at KINTO Technologies is hiring! We are always looking for talented engineers. If you are interested in joining the team, or would like to know more about our work and environment, please feel free to reach out! We also have a web page listing our open positions, so please take a look: here
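As a closing bonus: the data source settings walked through in this article can also be captured as a Grafana file-based provisioning entry instead of being clicked together in the UI. The sketch below builds such a definition in Python. The field names (`sigV4Auth`, `serverless`, `logMessageField`, and so on) follow the Grafana OpenSearch plugin's provisioning options as I understand them, so treat this as an unverified illustration and check the plugin documentation for your Grafana version before relying on it.

```python
import json


def opensearch_datasource(url: str, region: str, index: str,
                          message_field: str = "message",
                          level_field: str = "level") -> dict:
    """Sketch of a Grafana provisioning entry mirroring the UI steps above."""
    return {
        "name": "OpenSearch Serverless",
        "type": "grafana-opensearch-datasource",  # plugin ID
        "access": "proxy",
        "url": url,                                # HTTP section: instance URL
        "jsonData": {
            "sigV4Auth": True,                     # Auth section: SigV4 on
            "sigV4Region": region,
            "serverless": True,                    # OpenSearch Details: Serverless on
            "database": index,                     # index to query
            "timeField": "@timestamp",
            "logMessageField": message_field,      # Logs section
            "logLevelField": level_field,
        },
    }


if __name__ == "__main__":
    # Hypothetical endpoint and index name, for illustration only.
    ds = opensearch_datasource(
        "https://example.ap-northeast-1.aoss.amazonaws.com",
        "ap-northeast-1",
        "app-logs*",
    )
    print(json.dumps(ds, indent=2))
```

Dropped into a provisioning YAML (or posted to the Grafana HTTP API), a definition like this would let you recreate the data source reproducibly across workspaces.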
Introduction

Hello, everyone. My name is Nakaguchi and I work in the Mobile App Development Group. How did you enjoy iOSDC Japan 2024? This year it took place in August, and the excitement was higher than ever. It had the energy of a real festival!!

This article is for those of you who:

- attended iOSDC
- are iOS engineers
- love attending conferences

I hope you'll enjoy reading it.

Up until last year, KINTO Technologies' participation in iOSDC was mostly voluntary: employees who were interested would attend, then share what they learned in lightning talks at internal study sessions or in tech blog posts. But this year, KINTO Technologies approached iOSDC 2024 with a whole new attitude!! This year we:

- became an official sponsor,
- submitted several proposals (and one was accepted, amazing!!🎉),
- held a special event to reflect on iOSDC.

We went in with a packed agenda!! And to wrap it all up, I'm writing this blog post!

Sponsor's Story

This year, for the first time, KINTO Technologies became an official sponsor of iOSDC🙌!!! Our Tech Blog Team has evolved into a Technical PR Group, and we are putting even more effort into external events! In addition to iOSDC, we are also sponsoring DroidKaigi 2024 and Developers Summit KANSAI. We're actively participating in, and increasingly showing up at, large conferences!

At iOSDC, our Mobile App Development Group took the lead, with great support from our Technical Public Relations Group, the Creative Team, and other departments across the company. For more details, our team members have covered it in a separate article and presented it at the iOSDC retrospective event, which I'll discuss later. Do check it out!

[Tech Blog] Our First iOSDC Sponsor Diary

This article mainly introduces the novelties and other deliverables we created for the event. Please take a look!
https://blog.kinto-technologies.com/posts/2024-08-21-iOSDC2024-novelties/

[Tech Blog] KINTO Technologies is a Gold Sponsor of iOSDC Japan 2024 & The Challenge Token is here 🚙

This article includes interviews with our employees. Please also take a look!

https://blog.kinto-technologies.com/posts/sponsored-iosdc-japan-2024/

[Presentation Slide] I would like to share our journey to becoming an iOSDC sponsor

Here, we introduce how we proceeded with our sponsorship in chronological order! If you're interested in sponsoring a conference, this deck offers many valuable insights, so please check it out!!!

https://speakerdeck.com/ktchiroyah/iosdcchu-chu-zhan-matenisitashi-wogong-you-sitai

The Proposal Story

This year, for the first time, we held a company-wide proposal writing workshop🙌!!! Team members interested in presenting got together, used these slides as a reference, discussed how to write a presentation and what content to include, and came up with the following proposals:

https://fortee.jp/iosdc-japan-2024/proposal/7fd624c8-06ec-4dc4-960a-da37f74cf90f
https://fortee.jp/iosdc-japan-2024/proposal/a82414cd-54d7-4abb-aa20-e35feb717489
https://fortee.jp/iosdc-japan-2024/proposal/e9e13b6d-0b74-4437-8ec0-ba6598b70ad7
https://fortee.jp/iosdc-japan-2024/proposal/ab0eeedf-0d4f-47a6-8df8-bd792b4d70ca

And the following proposals were selected!! Wow! It's truly amazing!!🎉

https://fortee.jp/iosdc-japan-2024/proposal/25af110e-61d0-4dc8-aba5-3e2e7d192868
https://fortee.jp/iosdc-japan-2024/proposal/c3901357-0782-4fb5-89b8-cb48c473f066

After hearing examples from other companies, I realized that they held meetings to review their proposals, and their number of submissions was on a whole different level. We can't afford to fall behind! Next year, I want to work even harder!

Held a Retrospective Event for iOSDC

Large-scale events like this often come with after-events, and last year several companies hosted iOSDC retrospective events.
And this year, we hosted our own🙌!!! I've written a pretty enthusiastic blog post about why I decided to hold the event, the lead-up to it, what the day itself was like, and more, so please do take a look!!!

https://blog.kinto-technologies.com/posts/2024-09-12-after-iosdc/

Below, I've summarized the sessions that our iOSDC participants attended.

KINTO Technologies Session Viewing Rankings

We had 15 participants (including 4 vendors) and compiled a ranking of the sessions they watched. This gives you a good idea of the technologies our company is currently interested in!!

Tied for 2nd place (6 participants): Learning about typed throws in Swift 6 and the overall picture of error handling in Swift

https://fortee.jp/iosdc-japan-2024/proposal/c48577a8-33f1-4169-96a0-9866adc8db8e

The speaker explained not only what typed throws are but also compared them with untyped throws, which made the talk very easy to understand. At first glance, typed throws seemed promising, so I was glad they addressed the official guidance that the feature shouldn't be used too lightly. It was also insightful to hear the presenter Koher's perspective.

Tied for 2nd place (6 participants): Roundtable Discussion "Strict Concurrency and Swift 6 Open a New Era: How to embrace the new era?"

https://fortee.jp/iosdc-japan-2024/proposal/5e7b95a8-9a2e-47d5-87a7-545c46c38b25

We had also been researching Strict Concurrency for Swift 6, and this session was extremely informative. I'd like to move our plans forward based on what was presented there. Additionally, the roundtable discussion format was refreshing, and it was wonderful to see everyone supporting each other. I hope to see more presentations like this in the future.
Tied for 2nd place (6 participants): Shared with Swift Package practices that accelerate development

https://fortee.jp/iosdc-japan-2024/proposal/52d755e6-2ba3-4474-82eb-46d845b6772c

Since we develop multiple apps, the concept of a shared Swift Package is very appealing. There is a dilemma, though: each app has different requirements, which makes it difficult to find common parts to share. On the other hand, I learned a lot about the steps for creating a shared Swift Package, such as team structure and operation methods.

Tied for 1st place (7 participants): Rookies LT Tournament

https://fortee.jp/iosdc-japan-2024/proposal/95d397a6-f81d-4809-a062-048a447279b3

One of our team members gave a presentation, so we rushed over to cheer them on!! Cheering with penlights was a lot of fun!! The talks were also very interesting, and some of our team members even said, "I want to try it next year!"

Tied for 1st place (7 participants): The Magic of App Clips: A New Era in iOS Design Development

https://fortee.jp/iosdc-japan-2024/proposal/66f33ab0-0d73-479a-855b-058e41e1379b

We haven't yet introduced App Clips in any of our apps, so many team members were eager to try them out, although some challenges, such as how to distribute App Clip code, are expected.

Below are the other sessions with high view counts.

Watched by 4 people:

- A thorough explanation of various "ViewControllers" in iOS/iPadOS and implementation examples
- Unraveling what defines an iOS app
- LT Tournament (Second Half)
- Increased cross-platform adoption. Is iOS development with Swift fading away?
- An introduction to software development for tackling complexity

Watched by 5 people:

- Understanding the data standard for integrating My Number Card on iPhone
- Unleashing the future of ridesharing with GraphQL and Schema-first development

In addition, the average number of sessions watched per person this time was 11.25!!!
Bonus

This year we also set up a sponsor booth, and we were curious which booths left the biggest impression on attendees, so we conducted a survey! We received responses from 9 people; here are the results (only booths with more than one vote are included).

We tallied the most memorable booths

Looking at the results, you can see the votes were quite spread out. (I believe the 6 votes for our booth were out of kindness!) It made me realize how difficult it is to create a booth that appeals to everyone. Against that backdrop, DeNA collecting 4 votes is truly impressive.

Conclusion

As mentioned at the beginning, the entire company was very enthusiastic about this year's iOSDC! Personally, I'm very satisfied with our sponsorship, our proposals, and the retrospective event. There are still many areas for improvement, though, and I hope to level up even more and participate in iOSDC next year!! And just like every year, the sessions were extremely informative, and I'm really glad I participated.