# Lessons from Azure OpenAI Service: Tips for Safe AI Image Editing

## Introduction

As of October 2025, AI-related news arrives almost daily: "look how capable the new model is," "look what it can do now." The pace of progress is remarkable, and even complex image edits can now be done relatively easily with AI. While we enjoy these benefits, one perspective we must not forget is how to use AI safely. We tend to be captivated by performance, by what impressive images can be produced and how natural the edits look, but it is just as important to keep asking whether a generated image could hurt someone.

## Learning "Responsible AI" from an Enterprise Service

Microsoft's Azure OpenAI Service (AOAI) is an excellent reference on this point. AOAI is designed for enterprises and government agencies; it uses the same technology as OpenAI, but stricter ethical guardrails based on the principles of Responsible AI are built in. OpenAI of course upholds similar principles, but AOAI is operated more conservatively.

## Why Is AOAI More Cautious?

For example, when a company uses OpenAI's API, the administrator is required to verify their identity with personal identification to prevent abuse. This approach pins responsibility on the individual operating the API. AOAI, by contrast, is an enterprise cloud service: usage is authorized based on the Azure contract and identity platform (such as Microsoft Entra ID). Requiring every employee to submit identification when a company uses a service is not realistic. AOAI accounts for this and makes individual identity verification unnecessary by trusting the corporation; in other words, it is an approach that trusts the contracting legal entity. On top of that, as an enterprise platform, AOAI imposes particularly careful usage restrictions (guardrails).

## Examples of Operations AOAI Blocks

So what exactly is restricted? In my own testing (using the gpt-image-1 model), AOAI blocked operations such as:

- Instructions to edit images of minors (including cases judged so from context, such as school uniforms)
- Prompts containing insulting or discriminatory language
- Instructions that swap a specific individual's face

From these restrictions, AOAI appears to prioritize at least the following three points.

## Three Points AOAI Prioritizes

- Protecting minors, who are in a socially vulnerable position
- Defending human dignity against verbal violence such as discrimination and insults
- Preventing the spread of misinformation through deepfakes and the like

To sum it up in a phrase, this is not "a wall that restricts freedom" but "a framework that protects others and society." To avoid hurting anyone, and to avoid accidentally becoming a perpetrator yourself, such a framework is well worth learning from.

## Transparency in the AI Era: Content Credentials

Now that individuals can casually generate and edit images, the risk of unintentionally hurting others has grown. To address this, images generated or edited by AI often carry embedded metadata called Content Credentials. This is a new mechanism for ensuring the transparency of AI generation, recording information such as:

- when the image was created
- which tools were used
- who edited it

Think of it as a digital nutrition label. This mechanism makes tampering and falsification detectable. In other words, an environment is taking shape in which people who use AI properly can publish openly, with proof in hand.

## Conclusion

Don't be swept along by convenience; ask yourself, "Could this expression hurt someone?" I believe the accumulation of that small awareness is the first step toward using AI safely, and that in the end it also protects you and the organization you belong to.
This is Nakanishi from the Developer Relations Group (also serving in the FACTORY E-commerce Development Group and the QA Group). This article is a report on InnerSource Gathering Tokyo 2025, held in Odaiba on September 12th. The term "inner source" has become much more common in recent years. At our company, we continue to make small, steady efforts to promote an inner source culture through our tech blog and by fostering an engineering culture. I learned a lot from this event, and it reaffirmed my belief that our efforts are truly contributing to an inner source culture. I'm writing this report to share this wonderful event more widely and to help spread the culture of inner source.

## Changing Culture Through "Actions": A Field-Based Approach to Implementing Inner Source

Throughout the event, a few common themes consistently echoed through the venue. Breaking down silos is not about slogans but about the accumulation of small actions. Code is not the only contribution. And rather than loudly proclaiming what is right, starting small and gaining allies is what drives culture change. Each speaker shared insights from their distinct perspective and cultural context.

## Capturing "Silos" as a Gradient

Right from the start, the message from the organizers was clear: inner source is an attempt to break down internal silos through cultural transformation, but it cannot be described in black-and-white terms; the intensity varies by organization. With that premise, the Chatham House Rule was declared, creating a space where participants could freely take insights away without linking them to specific speakers or companies. Once these ground rules were established, the discussions became lively and engaging, which was fantastic. It was the moment I felt glad I had participated from the very start of the event.

NTT DOCOMO, which provided the venue (docomo R&D OPEN LAB ODAIBA), introduced its facility for "creating, learning, and sharing," equipped with giant LED displays and 5G/edge computing environments. The venue is also open as a co-working hub outside of event days, and I was impressed by how it was designed as a permanent place where engineers can naturally gather.

## Inner Source Is Not a "License" but a "Method"

The keynote reinterpreted inner source as a "method," drawing on the history and practices of OSS (open source software). Public discussions, open access for anyone to participate, and community collaboration: bringing these OSS practices into the company, it was said, is a return to the fundamentals of inner source. The talk also shone a light on contributions beyond code, explicitly stating that reviews, testing, triage, translations, documentation, infrastructure operations, and public relations are all first-class contributions. In discussing the language of reviews, the speaker introduced a principle from the networking community, "receive with generosity, express with precision," and connected it to the etiquette of dialogue. A suggestion is not a command but the start of a dialogue; since third parties learn by observing such interactions, the language itself shapes the culture.

The other key axis was compatibility with Agile. Frequent releases, self-organization, evolving requirements: what OSS has long practiced and what agile advocates ultimately resemble each other, so the method of change follows the same path. Actions change, thoughts change, and culture changes. I was reminded that despite the variety of terms in use, such as "engineer culture," the underlying mindset and behavior share a common core.

Regarding motivation, recent trends were shared in which fun and learning are joined by career growth and reputation. One example described how a company's design for supporting growth and learning led to a rapid increase in internal contributors. The Q&A was practical: incentives that rely solely on money don't last; the best approach is to design around three elements: fun, learning, and recognition. As a strategy against burnout, a step-by-step approach was shared: rather than speaking to a large group from the start, reach out to individuals, achieve small wins, and gradually build a base of supporters. The answer to a question about handling difficult behavior was down-to-earth: establish a code of conduct and work to raise the community's overall "average." The idea of creating an inclusive framework that turns others into allies rather than excluding them was consistently present.

https://kdmsnr.com/slides/20250912_innersource/

Many of these align with the patterns in More Fearless Change: Strategies for Making Your Ideas Happen, and I highly recommend it for driving new initiatives within your organization.

## NRI "xPalette": Circulating Capabilities

Nomura Research Institute shared four years of insights from creating environments that unleash engineers' creativity and initiative. They establish reference architectures and individual guides, then circulate the insights gained through experimentation in a "learn → apply in the field → feed back" cycle. As this cycle turns, opportunities to participate in projects increase, and new business seeds that combine multiple technologies begin to sprout. Explaining activities in terms of business value and creating a positive spiral from budget allocation to environmental improvement is a realistic approach. Management takes the lead in praising young members for trying things out, supporting a "just give it a go" mindset with small allocations of time and budget. This kind of hands-on, tangible management was evident throughout.

## Mitsubishi Electric OSPO/ISPO: Turning External Attention into Internal Momentum

Mitsubishi Electric has established a system that runs an OSPO (Open Source Program Office) and an ISPO (InnerSource Program Office) in parallel, starting by spreading the habit of "prepare the platform → publish it openly." They draw attention at external events and channel outside interest back into internal recognition, a style of storytelling characteristic of large corporations. They showed how a series of internal events helped certain terms become shared language across the organization. They also announced plans to host a conference in Yokohama on November 13th, a proactive stance of building the "place" for the movement first.

InnerSource Summit 2025: https://innersourcecommons.org/ja/events/isc-2025/

## Discussion: Rules Are Also "Made Together" / Quality Is "Value for the User"

The discussion was steeped in the practical experience of implementing inner source in a large corporation. What struck me most was the proposal that rules, such as development standards and internal regulations, are exactly the kind of thing that should be created through inner source: involve the relevant parties and make even the approval flow transparent. On KPIs, don't explain solely in monetary terms; measure the upstream factors that feed into cost, such as team size, code reuse, and review lead time. On the definition of quality, the stance was to center "value for the user" rather than measuring solely by the number of defects. To a question about how to handle generative AI, the calm, grounded answer was that whether the creator is human or AI is not the core issue; what is needed is quality management that includes user education, operational design, and feedback loops. On overcoming cost-allocation and budget barriers, a pragmatic method was shared: start within your own team first, build a following, and let the system follow later.

## KDDI "KAGreement": Where Open Agreements Become Culture

KDDI's talk covered an initiative to put "why we are here" into words. Through weekly FigJam sessions attended by the Vice President and public sharing on Slack, they refine their working agreement (guidelines for action) together. The discussions behave much like inner source patterns: public sharing, archiving, small and fast actions, and cross-team work. Holding random breakout sessions during company-wide meetings, mixing departments and seniority levels for dialogue, could be described as an implementation that gradually raises the organization's "average." Behind the scenes, executive sponsorship provides solid support.

https://www.docswell.com/s/mitsuba_yu/KLVRX7-2025-09-12-163618

## teamLab: If You Can't Find It, It Might as Well Not Exist

teamLab's core theme is building mechanisms that grow through use. The larger an organization grows, the harder it becomes to see what exists and where it is located. So they launched an internal "InnerSource department," starting by visualizing interests and potential collaborators. Next they built the InnerSource Portal, which consolidates onto a single page each repository's overview, owner, setup instructions, the projects using it, and links to Issues/PRs. Issue templates are categorized into types such as "question, improvement, feature request..." to make them easy to write. They also spoke of plans to run titles and awards with a playful spirit, such as "InnerSource Champion," "Top Contributor," "Legendary Issue," and "Rookie of the Year," and are considering regular release-sharing sessions and deadline-driven days where everyone collaborates on a single internal OSS project. The objective is singular: increase the number of moments when the thing you need is "right there when you need it."

https://speakerdeck.com/teamlab/innersource_gathering_tokyo2025_teamlab

## Summary: Small Successes Become Culture

What resonated most at this year's ISGT was the theme recurring across all presentations: "gentle pathways." A UI that makes your first PR less intimidating. Words that make your first review feel comfortable. A title that makes your first contribution a point of pride. The accumulation of such small successes raises the community's overall "average" and dissolves silos. I strongly felt that inner source is not the name of a system but the design of small daily actions.

## Extra: The Vibe at the Social Gathering

After the event, an InnerSource OST (Open Space Technology) session was held, with discussions continuing in separate groups for each theme. The event flowed naturally into a social gathering, so the atmosphere was lively from the start, with an active exchange of opinions continuing throughout. It was striking how many conversations that lead somewhere, not just casual exchanges, emerged, precisely because the gathering brought together people genuinely committed to the question of how to improve culture.
## Introduction

Nice to meet you. I'm Hand-Tomi, and I develop mobile apps (Flutter) at KINTO Technologies (KTC). After publishing an app to the Google Play Store, have you ever run into Firebase-related errors only in the app installed from the store? The app works fine in debug builds and internal testing, yet in production you see error logs like:

```
E Failed to get FIS auth token
java.util.concurrent.ExecutionException: ...
Caused by: Firebase Installations Service is unavailable. Please try again later.
```

This article walks through the root cause of this kind of problem and how to fix it, step by step.

## Symptoms

- Debug build / internal release: works fine
- Installed from the Play Store: Firebase initialization fails, and no FCM token can be issued

It looks like a temporary server outage, but the actual cause is a mismatch between the release build's app identity (signature/package) and your Firebase/Google Cloud configuration.

## Understanding the Root Cause

### Play's Final Signing Mechanism

When you distribute an app through Google Play, the final APK/AAB is re-signed with the Google Play App signing key. This is a different key from the upload key you use during development.

### Registration Is Required in Two Places

For Firebase/Google Cloud to correctly identify your Android app, the Play signing key's SHA fingerprints must be registered in the following two places:

- Firebase Console (Project settings > Android app): SHA-256 required
- Google Cloud Console (API key > if Android app restrictions are set): SHA-1 required

💡 Analogy: Firebase demands a SHA-256 passport, while the API key restriction demands a SHA-1 ID card. Development credentials (debug/upload keys) alone won't get you through the airport (a Play build).

## Fix: Step by Step

### 1. Obtain the Play Signing Certificate's SHA

1. Open the Google Play Console → select the target app
2. Left menu: Test and release → App integrity
3. Scroll down to the Play app signing section
4. Click the View settings button
5. On the App signing key certificate tab, copy the SHA-1 and SHA-256 values

:::message
Note: The same screen also shows an "Upload key certificate" tab, but this guide needs the App signing key certificate values. The SHA-1 is used in the Google Cloud Console, and the SHA-256 in Firebase.
:::

### 2. Register the SHA-256 in the Firebase Console

1. Firebase Console → select the project → Project settings
2. Under Your apps, select the Android app
3. In the SHA certificate fingerprints section, click Add fingerprint
4. Paste the Play signing key's SHA-256 and save
5. (Recommended) Confirm the existing upload/debug keys' SHA-1/256 are registered as well

:::message
Changes propagate within a few minutes of saving, but because of device caches, a complete uninstall → reinstall is the most reliable check.
:::

### 3. Configure API Key Restrictions in the Google Cloud Console (Only If Android Restrictions Are in Use)

If the API key corresponding to api_key.current_key in google-services.json has Android app restrictions set, register the Play signing key's SHA-1 as follows:

1. Google Cloud Console → select the same project as Firebase
2. Left menu: APIs & Services → Credentials
3. Select the relevant API key from the list
4. Check Application restrictions:
   - If "Android apps" is selected: proceed to the next step
   - If "None" or another restriction: you can skip this section
5. Add a package name + SHA-1 pair:
   - Package name: applicationId
   - SHA-1: the Play signing key's SHA-1 (not the upload key)
6. Save (propagation can take a few minutes)

:::message alert
This screen accepts SHA-1 only. It is normal that no SHA-256 field appears.
:::

### 4. Verify FirebaseOptions (Recommended)

Check that the release build references the correct google-services.json.

Check the build artifact: in app/build/generated/res/google-services/<variant>/values/values.xml, confirm the following values are as expected:

- gms_app_id (= mobilesdk_app_id)
- project_id
- gcm_defaultSenderId (= project number)
- default_web_client_id / API key

Log the runtime options:

```kotlin
val options = FirebaseApp.getInstance().options
Log.d("FB_OPTS", """
    appId=${options.applicationId}
    projectId=${options.projectId}
    apiKey=${options.apiKey}
    sender=${options.gcmSenderId}
""".trimIndent())
```
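Since the app itself is Flutter, you may prefer to do the same runtime check from Dart. A minimal sketch, assuming the firebase_core plugin has already been initialized; the function name and print-based logging are illustrative:

```dart
import 'package:firebase_core/firebase_core.dart';

/// Prints the FirebaseOptions the running app actually resolved,
/// so they can be compared against the Firebase Console values.
void logFirebaseOptions() {
  final options = Firebase.app().options;
  // These fields mirror the google-services.json entries above.
  print('appId=${options.appId}');
  print('projectId=${options.projectId}');
  print('apiKey=${options.apiKey}');
  print('sender=${options.messagingSenderId}');
}
```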
### 5. Verify the Fix

1. On the device, clear app data or uninstall the app completely
2. Reinstall from the Play Store
3. Launch the app → check with adb logcat:

```shell
adb logcat | grep -i -E "Firebase|Installation|FCM|AppCheck|Gms"
```

Expected logs:

- FirebaseInstallations: FID generated successfully
- Gms: token retrieval succeeded
- FirebaseMessaging: the onNewToken(...) callback fires

## Common Pitfalls (Troubleshooting)

### 1. Registering the Upload Key Instead of the Play Signing Values

Always use the app signing certificate values in both Firebase and Cloud.

### 2. Release Uses a Different File in Multi-Flavor Setups

- Misplaced app/src/<flavor>/google-services.json
- apply plugin: "com.google.gms.google-services" is not at the bottom of the module build file

### 3. Wrong API Key

Double-check that current_key in google-services.json matches the key you are editing in the Cloud Console.

### 4. App Check (Play Integrity) Enforcement Is ON

If integration isn't finished, temporarily turn enforcement OFF → isolate the cause → turn it back ON.

### 5. R8/ProGuard/Resource Shrinking Effects

Temporarily build with minifyEnabled false / shrinkResources false to isolate.

### 6. Device/Environment Issues

- Google Play services needs updating
- Device time auto-sync settings
- Blocking by proxies or security apps

## Final Checklist

- Play Console → App integrity → obtain the app signing certificate's SHA-1/256
- Firebase Console → Android app → register the SHA-256 (and the SHA-1 if needed)
- Cloud Console → API key (when the Android apps restriction is set) → register package name + Play signing SHA-1
- Completely uninstall → reinstall the store build, then confirm FID/FCM token issuance
- Verify values.xml and the runtime FirebaseOptions values
- Inspect additional factors: App Check, ProGuard, network, etc.

## FAQ

Q1. The Play Integrity API status says "Integration: not started." Do I still need to register the SHAs?
Yes. Regardless of that status, store builds are signed with the Play signing key. Always register the SHA-256 in Firebase, and if the Cloud API key has Android app restrictions set, register the SHA-1 as well.

Q2. Can't I register a SHA-256 in the Cloud Console's Android apps restriction?
No. The UI accepts SHA-1 only. Register the SHA-256 on the Firebase side instead.

Q3. What if the same error persists after registration?
Check in this order:
1. Completely uninstall and reinstall the app
2. Check the release build's values.xml
3. Isolate with App Check enforcement OFF
4. Re-verify the API key matching
5. Check for R8/ProGuard effects

## Summary

This problem commonly occurs when you overlook that release builds are based on the Play signing key.

- Register the SHA-256 in Firebase
- Register the SHA-1 in the Cloud API key (when Android restrictions are set)

Register these two in the right places and most cases resolve immediately. In practice, also checking the reinstall and the options-verification logs after registration helps prevent recurrence.

## Reference: Extracting SHAs from a Local Keystore

The Play signing key is not on your machine, but for checking the upload/debug keys:

```shell
# Show certificate fingerprints (SHA-1/256) from a keystore
keytool -list -v -keystore <your-keystore.jks> -alias <alias-name>

# Android debug key (macOS/Linux)
keytool -list -v -keystore ~/.android/debug.keystore \
  -alias androiddebugkey -storepass android -keypass android
```
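As a complementary check not covered above, standard Android tooling can also dump the certificate attached to a built artifact, or list every signing config Gradle knows about; the APK file name is a placeholder:

```shell
# Print the certificate digests of a built APK
# (apksigner ships with the Android SDK build-tools)
apksigner verify --print-certs app-release.apk

# List SHA-1/SHA-256 for every signing config and variant in the project
./gradlew signingReport
```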
I'm Tsuyoshi Yamada, an Android engineer at KINTO Technologies. Now that support for Android API level 35 or higher is mandatory, this article shares know-how for quickly bringing an existing Android app up to edge-to-edge.

## 1. Introduction

```kotlin
android {
    defaultConfig {
        targetSdk = 35
        // ...
    }
    // ...
}
```

Since August 31, 2025, Android apps published on the Google Play Store must target API level 35 or higher. In other words, unless an app is built with targetSdk set to 35 or above, Google Play no longer accepts new app submissions or updates to existing apps.

However, when you open an app built with targetSdk 35 or higher, the status bar area, the system navigation bar area, and the screen area hidden by the display cutout all become part of the app's drawing area. Existing apps therefore need edge-to-edge support. Many apps have no doubt already been updated, but developers who couldn't secure the development schedule have run out of slack. You may also be in the situation where most screens are done, but a few screens stubbornly refuse to display correctly.

This article presents know-how for making an existing app edge-to-edge as quickly as possible. The outline is as follows:

- On screens built with Views, make use of callbacks.
- On screens built with Composables, make use of the various WindowInsets-related functions.
- In apps that mix Views and Composables, make good use of the functions provided to reconcile the problems that arise when the View-side and Compose-side measures are combined.
- Make good use of internal list padding and the like so that, mid-scroll, the status bar and system navigation bar areas can still be used as display area.
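One note before we start, not covered in the sections below: if you want to observe and fix this behavior before actually raising targetSdk, androidx.activity 1.8+ provides an opt-in API, enableEdgeToEdge(). A minimal sketch; the Activity name is illustrative:

```kotlin
import android.os.Bundle
import androidx.activity.enableEdgeToEdge
import androidx.appcompat.app.AppCompatActivity

class PreviewActivity : AppCompatActivity() {
    override fun onCreate(savedInstanceState: Bundle?) {
        // Opts this Activity into edge-to-edge rendering even while
        // targetSdk < 35, so the inset handling described in the
        // following sections can be verified ahead of the deadline.
        enableEdgeToEdge()
        super.onCreate(savedInstanceState)
        // setContentView(...) and inset listeners as shown in section 2
    }
}
```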
## 2. Edge-to-Edge Support for View-Based Screens

![](/assets/blog/authors/tsuyoshi_yamada/2-01_2D_basic-right_KINTO-character.svg =125x)
くもびぃ

Below is a simple app whose screen, built with Views, shows images flowing vertically:

```kotlin
class MainActivity : AppCompatActivity() {
    private lateinit var binding: ActivityMainBinding
    private lateinit var imageAdapter: ImageAdapter

    override fun onCreate(savedInstanceState: Bundle?) {
        super.onCreate(savedInstanceState)
        binding = ActivityMainBinding.inflate(layoutInflater)
        setContentView(binding.root)
        withStyledAttributes(TypedValue().data, intArrayOf(android.R.attr.colorPrimary)) {
            window.statusBarColor = getColor(0, 0)
        }
        setSupportActionBar(binding.toolbar)
        imageAdapter = ImageAdapter(layoutInflater, 2)
        binding.recyclerView.adapter = imageAdapter
        binding.fab.setOnClickListener { _ ->
            imageAdapter.increment()
            binding.recyclerView.scrollToPosition(imageAdapter.length - 1)
        }
    }

    override fun onCreateOptionsMenu(menu: Menu): Boolean {
        menuInflater.inflate(R.menu.menu_main, menu)
        return true
    }

    override fun onOptionsItemSelected(item: MenuItem) = when (item.itemId) {
        R.id.action_settings -> {
            val editText = layoutInflater.inflate(R.layout.item_edit_text, null) as EditText
            val dialog = AlertDialog.Builder(this)
                .setView(editText)
                .setTitle(R.string.image_count)
                .setNegativeButton(R.string.cancel) { dialog, _ -> dialog.dismiss() }
                .show()
            editText.setOnEditorActionListener { _, actionId, _ ->
                if (actionId != EditorInfo.IME_ACTION_DONE) return@setOnEditorActionListener false
                editText.text.toString().toIntOrNull()?.let { imageAdapter.length = it }
                dialog.dismiss()
                return@setOnEditorActionListener true
            }
            true
        }
        else -> super.onOptionsItemSelected(item)
    }
}

class ImageAdapter(private val inflater: LayoutInflater, initialLength: Int) :
    RecyclerView.Adapter<ImageAdapter.ViewHolder>() {

    var length: Int = initialLength
        set(value) {
            val incremental = value - field
            if (incremental == 0) return
            field = value
            if (incremental < 0) {
                notifyItemRangeRemoved(value, -incremental)
            } else {
                notifyItemRangeInserted(value - incremental, incremental)
            }
        }

    override fun getItemViewType(position: Int) = position % IMAGE_LIST.size

    override fun onCreateViewHolder(
        parent: ViewGroup,
        viewType: Int
    ) = ViewHolder(ItemImageBinding.inflate(inflater, parent, false).apply {
        image.setImageResource(IMAGE_LIST[viewType])
    })

    override fun onBindViewHolder(holder: ViewHolder, position: Int) {
        val bias = (position * 0.3F).rem(2F).let { if (it < 1F) it else 2F - it }
        holder.binding.spaceStart.let { spaceStart ->
            (spaceStart.layoutParams as? LinearLayout.LayoutParams)?.let {
                it.weight = bias
                spaceStart.layoutParams = it
            }
        }
        holder.binding.spaceEnd.let { spaceEnd ->
            (spaceEnd.layoutParams as? LinearLayout.LayoutParams)?.let {
                it.weight = 1F - bias
                spaceEnd.layoutParams = it
            }
        }
    }

    override fun getItemCount() = length

    fun increment() {
        length = length + 1
    }

    class ViewHolder(val binding: ItemImageBinding) : RecyclerView.ViewHolder(binding.root)

    companion object {
        private val IMAGE_LIST = listOf(
            // ...
        )
    }
}
```

The layout file is as follows:

```xml
<?xml version="1.0" encoding="utf-8"?>
<androidx.constraintlayout.widget.ConstraintLayout
    xmlns:android="http://schemas.android.com/apk/res/android"
    xmlns:app="http://schemas.android.com/apk/res-auto"
    xmlns:tools="http://schemas.android.com/tools"
    android:layout_width="match_parent"
    android:layout_height="match_parent"
    tools:context=".MainActivity">

    <com.google.android.material.appbar.AppBarLayout
        android:id="@+id/layout_appbar"
        android:layout_width="0dp"
        android:layout_height="wrap_content"
        android:background="?colorPrimary"
        app:layout_constraintEnd_toEndOf="parent"
        app:layout_constraintStart_toStartOf="parent"
        app:layout_constraintTop_toTopOf="parent">

        <com.google.android.material.appbar.MaterialToolbar
            android:id="@+id/toolbar"
            style="@style/Widget.MaterialComponents.Toolbar.Primary"
            android:layout_width="match_parent"
            android:layout_height="?attr/actionBarSize" />
    </com.google.android.material.appbar.AppBarLayout>

    <androidx.recyclerview.widget.RecyclerView
        android:id="@+id/recycler_view"
        android:layout_width="0dp"
        android:layout_height="0dp"
        android:background="?colorBackgroundFloating"
        app:layout_constraintBottom_toBottomOf="parent"
        app:layout_constraintEnd_toEndOf="parent"
        app:layout_constraintStart_toStartOf="parent"
        app:layout_constraintTop_toBottomOf="@id/layout_appbar"
        app:layoutManager="androidx.recyclerview.widget.LinearLayoutManager"
        android:orientation="vertical" />

    <com.google.android.material.floatingactionbutton.FloatingActionButton
        android:id="@+id/fab"
        android:layout_width="wrap_content"
        android:layout_height="wrap_content"
        android:layout_gravity="bottom|end"
        app:shapeAppearanceOverlay="@style/ShapeAppearance.App.Circle"
        app:backgroundTint="?colorSecondary"
        app:srcCompat="@android:drawable/ic_input_add"
        app:layout_constraintBottom_toBottomOf="parent"
        app:layout_constraintEnd_toEndOf="parent" />
</androidx.constraintlayout.widget.ConstraintLayout>
```

We compare this app built with targetSdk 34 versus 35, running on Android OS 10 or later:

targetSdk = 34 / targetSdk = 35 (no measures)

With targetSdk 34 everything renders without issue, but with 35 the status bar is absorbed into the title bar and becomes hard to read, the title itself gets punched through by the cutout (the notch that houses the camera and other sensors within the display), and the system navigation bar area is also included in the app's drawing area. Edge-to-edge enforcement at targetSdk 35 means the app's drawing area is expanded, like it or not, into the status bar, system navigation bar, and cutout areas.

Releasing an app in this state to an app store would be painful. Time is short, but something must be done.

### 2.1. ViewCompat.setOnApplyWindowInsetsListener

"Display content edge-to-edge in views" introduces various ways to make Views edge-to-edge aware. Among them, a highly general-purpose approach is to use ViewCompat.setOnApplyWindowInsetsListener(View, OnApplyWindowInsetsListener):

```kotlin
override fun onCreate(savedInstanceState: Bundle?) {
    super.onCreate(savedInstanceState)
    binding = ActivityMainBinding.inflate(layoutInflater)
    setContentView(binding.root)
    // window.statusBarColor = getColor(0, 0)  // Not needed (no effect) under edge-to-edge
    setSupportActionBar(binding.toolbar)
    imageAdapter = ImageAdapter(layoutInflater, 2)
    binding.recyclerView.adapter = imageAdapter
    binding.fab.setOnClickListener { _ ->
        imageAdapter.increment()
        binding.recyclerView.scrollToPosition(imageAdapter.length - 1)
    }
    applyWindowInsetsForE2E()
}

private fun applyWindowInsetsForE2E() {
    ViewCompat.setOnApplyWindowInsetsListener(binding.layoutAppbar) { v, windowInsets ->
        val insets = windowInsets.getInsets(
            WindowInsetsCompat.Type.systemBars() or WindowInsetsCompat.Type.displayCutout()
        )
        v.updatePadding(left = insets.left, top = insets.top, right = insets.right)
        WindowInsetsCompat.CONSUMED
    }
    ViewCompat.setOnApplyWindowInsetsListener(binding.recyclerView) { v, windowInsets ->
        val insets = windowInsets.getInsets(
            WindowInsetsCompat.Type.systemBars() or WindowInsetsCompat.Type.displayCutout()
        )
        v.updatePadding(left = insets.left, right = insets.right, bottom = insets.bottom)
        WindowInsetsCompat.CONSUMED
    }
    ViewCompat.setOnApplyWindowInsetsListener(binding.fab) { v, windowInsets ->
        val insets = windowInsets.getInsets(
            WindowInsetsCompat.Type.systemBars() or WindowInsetsCompat.Type.displayCutout()
                or WindowInsetsCompat.Type.ime()
        )
        v.updateLayoutParams<ViewGroup.MarginLayoutParams> {
            val margin = resources.getDimensionPixelOffset(R.dimen.fab_margin)
            bottomMargin = insets.bottom + margin
            rightMargin = insets.right + margin
        }
        WindowInsetsCompat.CONSUMED
    }
}
```

Inside the listener, you pass WindowInsets$getInsets(Int) an appropriate combination of flags from WindowInsetsCompat.Type to obtain the insets (the top, left, right, and bottom spaces). Note that MainActivity$applyWindowInsetsForE2E() calls ViewCompat.setOnApplyWindowInsetsListener(...) separately for each of AppBarLayout, RecyclerView, and FloatingActionButton. AppBarLayout needs top padding for the status bar and cutout areas, while RecyclerView does not need to worry about the top, so no top padding is set there. For FloatingActionButton, the bottom and end insets are added onto the margins in its LayoutParams.

For the RecyclerView, on the other hand, adding android:clipToPadding="false" as below makes the padding apply to the RecyclerView's entire content on the left, right, and bottom, rather than to its viewport. That way the bottom system navigation bar area is still used as display area while scrolling, yet at the bottom of the scroll the content does not overlap the system navigation bar:

```xml
<androidx.recyclerview.widget.RecyclerView
    android:id="@+id/recycler_view"
    android:layout_width="0dp"
    android:layout_height="0dp"
    android:background="?colorBackgroundFloating"
    android:clipToPadding="false"
    app:layout_constraintBottom_toBottomOf="parent"
    app:layout_constraintEnd_toEndOf="parent"
    app:layout_constraintStart_toStartOf="parent"
    app:layout_constraintTop_toBottomOf="@id/layout_appbar"
    app:layoutManager="androidx.recyclerview.widget.LinearLayoutManager"
    android:orientation="vertical" />
```

What edge-to-edge support asks of us is to use the screen from edge to edge while keeping the app UI out of the way of the status bar and system navigation bar, and keeping the cutout out of the way of the app UI. When time for countermeasures is limited, you may have to buy time by simply filling the inset areas with a background color; even so, we want to exploit edge-to-edge's strengths wherever possible.

targetSdk = 35 (with measures, scrolled to top) / targetSdk = 35 (with measures, scrolled to bottom)

Returning WindowInsetsCompat.CONSUMED at the end of the listener tells the Android system that the area is consumed as each View's padding or margin. In other words, the top padding area of the AppBarLayout belongs to the AppBarLayout, so that padding area is also painted by the android:background="?colorPrimary" declared in the XML layout. Because of this property, under edge-to-edge the notion of a "status bar background color" no longer exists, and from API level 35 window.statusBarColor (Window$setStatusBarColor(Int)) has become a deprecated function with no effect.

The height of the system navigation bar differs between 3-button navigation (left image below) and gesture navigation (right image below). Your app UI must be implemented so that it can follow this variation.

The strength of the ViewCompat.setOnApplyWindowInsetsListener(...) approach is that the code is very readable and copes well with the complex interplay of app and device circumstances. MainActivity$applyWindowInsetsForE2E() fetches, for each of the three Views, the padding values it needs via WindowInsetsCompat$getInsets(Int) and applies them as padding or margins. When fetching the FloatingActionButton's margin values, the WindowInsetsCompat.Type.ime() flag is included; as in the right image below, this lifts the FloatingActionButton by the height of the software keyboard while it is shown (you will rarely need to lift a button like this, but it makes a good example). To keep following such app state changes, keep the listener set and monitoring even after the screen's initial setup, rather than unsetting it.

targetSdk = 35 (with measures, voice-input UI shown) / targetSdk = 35 (with measures, gesture navigation)

For apps that allow screen rotation, ViewCompat.setOnApplyWindowInsetsListener(...) is even more powerful. In a rotating app, the cutout moves with every rotation. In unrotated portrait it overlaps the status bar at the top, so the top padding can be whichever of the status bar or cutout is taller; in every other orientation you need padding for both the status bar and the cutout. As for the system navigation bar, with button navigation it moves to the left or right edge in landscape, with gesture navigation it is always at the bottom, and the title bar is always at the top... a large number of complex combinations exist. Here too, ViewCompat.setOnApplyWindowInsetsListener(...) always supplies valid padding and margin values.

targetSdk = 35 (with measures, landscape rotated 90° left, button navigation) / targetSdk = 35 (with measures, landscape rotated 90° right, gesture navigation)

### 2.2. Using ItemDecoration and the Like

Depending on how your Views are structured, it may be hard to add new callbacks to many Views. In such cases it is fine to share inset values obtained in one place. In apps where insets rarely change dynamically, such as portrait-only apps that need not consider rotation, setting things up as statically as possible arguably carries less risk of introducing bugs. You could also share inset values obtained in the Activity with Fragments (a minimal sketch of that idea follows at the end of this section).

```kotlin
private var insetBottom = 0

override fun onCreate(savedInstanceState: Bundle?) {
    // ...
    ViewCompat.setOnApplyWindowInsetsListener(binding.layoutAppbar) { v, windowInsets ->
        // For apps that are portrait-only and need not consider rotation
        val insets = windowInsets.getInsets(
            WindowInsetsCompat.Type.systemBars() or WindowInsetsCompat.Type.displayCutout()
        )
        insetBottom = insets.bottom
        binding.recyclerView.addItemDecoration(createListBottomSpacingItemDecoration(insetBottom))
        binding.fab.updateLayoutParams<ViewGroup.MarginLayoutParams> {
            val margin = resources.getDimensionPixelOffset(R.dimen.fab_margin)
            bottomMargin = insets.bottom + margin
            rightMargin = insets.right + margin
        }
        v.updatePadding(left = insets.left, top = insets.top, right = insets.right)
        WindowInsetsCompat.CONSUMED
    }
}
```

For a RecyclerView, for example, a function like the following using RecyclerView$addItemDecoration(RecyclerView.ItemDecoration) can apply edge-to-edge padding at the bottom of the scroll. ItemDecoration is mainly used for divider lines, but it can also be used purely to place blank space at a fixed position like this. Depending on the app's structure, this may require fewer changes than setting padding with android:clipToPadding="false":

```kotlin
fun createListBottomSpacingItemDecoration(insetBottom: Int) = object : RecyclerView.ItemDecoration() {
    override fun getItemOffsets(outRect: Rect, view: View, parent: RecyclerView, state: RecyclerView.State) {
        outRect.set(0, 0, 0,
            if (parent.getChildAdapterPosition(view) < state.itemCount - 1) 0 else insetBottom)
    }
}
```
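The Activity-to-Fragment sharing mentioned above is not spelled out in the text, but one minimal way it could look is a shared ViewModel; the class and property names here are illustrative:

```kotlin
import androidx.core.graphics.Insets
import androidx.lifecycle.ViewModel
import kotlinx.coroutines.flow.MutableStateFlow
import kotlinx.coroutines.flow.StateFlow

// Hypothetical holder: the Activity writes the insets it receives in its
// listener, and each Fragment reads them instead of registering its own.
class InsetsViewModel : ViewModel() {
    private val _insets = MutableStateFlow(Insets.NONE)
    val insets: StateFlow<Insets> = _insets

    fun update(insets: Insets) {
        _insets.value = insets
    }
}
```

In the Activity's listener you would call update(insets); a Fragment obtains the same instance with by activityViewModels() and collects insets to pad its own views.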
## 3. Edge-to-Edge Support for Composable-Based Screens

Four years after Jetpack Compose 1.0 was released, many existing projects have made substantial progress migrating from Views to Composables, and recently started projects often build most of their UI in Compose from the outset. A Compose counterpart of the earlier View-based app might look like the following. This article assumes compose-bom 2024.06.00 or later with Material3:

```kotlin
class MainActivity : AppCompatActivity() {
    override fun onCreate(savedInstanceState: Bundle?) {
        super.onCreate(savedInstanceState)
        setContent {
            E2ESampleTheme {
                val showsDialog = remember { mutableStateOf(false) }
                val isFabClicked = remember { mutableStateOf(false) }
                Scaffold(
                    topBar = {
                        Row(
                            modifier = Modifier
                                .fillMaxWidth()
                                .heightIn(64.dp)
                                .background(MaterialTheme.colorScheme.primary),
                            verticalAlignment = Alignment.CenterVertically
                        ) {
                            Text(
                                stringResource(R.string.app_name),
                                modifier = Modifier
                                    .padding(start = 16.dp)
                                    .weight(1F, true),
                                fontSize = 20.sp,
                                fontWeight = FontWeight.W600,
                                maxLines = 1,
                                overflow = TextOverflow.Ellipsis
                            )
                            TextButton(
                                modifier = Modifier.padding(end = 8.dp),
                                onClick = { showsDialog.value = true },
                                colors = ButtonDefaults.buttonColors(contentColor = MaterialTheme.colorScheme.onPrimary)
                            ) {
                                Text(
                                    stringResource(R.string.image_count),
                                    fontSize = 16.sp,
                                    fontWeight = FontWeight.W600
                                )
                            }
                        }
                    },
                    floatingActionButton = {
                        FloatingActionButton(
                            modifier = Modifier
                                .padding(dimensionResource(R.dimen.fab_margin)),
                            shape = CircleShape,
                            containerColor = MaterialTheme.colorScheme.secondary,
                            contentColor = MaterialTheme.colorScheme.onSecondary,
                            onClick = { isFabClicked.value = true }
                        ) {
                            Icon(Icons.Filled.Add, "One more")
                        }
                    },
                    content = { innerPadding ->
                        ImageColumn(Modifier.padding(innerPadding), showsDialog, isFabClicked)
                    }
                )
            }
        }
        withStyledAttributes(TypedValue().data, intArrayOf(android.R.attr.colorPrimary)) {
            window.statusBarColor = getColor(0, 0)
        }
    }
}

@DrawableRes
private val IMAGE_LIST = listOf(
    // ...
)

@Composable
fun ImageColumn(modifier: Modifier = Modifier, showsDialog: MutableState<Boolean>, isFabClicked: MutableState<Boolean>) {
    var length by remember { mutableIntStateOf(2) }
    Box(modifier = modifier.fillMaxSize()) {
        val lazyListState = rememberLazyListState()
        LazyColumn(state = lazyListState, modifier = Modifier.fillMaxSize()) {
            items(length, key = { it }) { index ->
                Box(Modifier.fillParentMaxWidth()) {
                    val bias = (index * 0.6F).rem(4F).let { if (it < 2F) it - 1F else (4F - it) - 1F }
                    Image(
                        modifier = Modifier
                            .padding(8.dp)
                            .align(BiasAlignment(horizontalBias = bias, verticalBias = 0F))
                            .animateItemPlacement(), // If compose 1.8.0 or upper, use .animateItem()
                        painter = painterResource(IMAGE_LIST[index % 4]),
                        contentDescription = null
                    )
                }
            }
        }
        val coroutineScope = rememberCoroutineScope()
        LaunchedEffect(isFabClicked.value) {
            if (isFabClicked.value) {
                length += 1
                isFabClicked.value = false
                coroutineScope.launch { lazyListState.animateScrollToItem(length - 1) }
            }
        }
        if (showsDialog.value) {
            var numberText by remember { mutableStateOf("") }
            AlertDialog(
                onDismissRequest = { showsDialog.value = false },
                title = { Text(stringResource(R.string.image_count)) },
                confirmButton = {},
                dismissButton = {
                    TextButton(onClick = { showsDialog.value = false }) { Text(stringResource(R.string.cancel)) }
                },
                text = {
                    TextField(
                        numberText,
                        { text -> numberText = text.filter { it.isDigit() } },
                        keyboardOptions = KeyboardOptions(keyboardType = KeyboardType.Number),
                        keyboardActions = KeyboardActions {
                            length = numberText.toIntOrNull() ?: 0
                            showsDialog.value = false
                        },
                        singleLine = true
                    )
                }
            )
        }
    }
}
```

As in the View-based case, we compare builds with targetSdk set to 34 and 35. The title bar again has problems, but unlike the View case the FloatingActionButton does not overlap the system navigation bar. This is because the function passed to Scaffold(...)'s content parameter applies padding from its innerPadding argument. Also, if you use TopAppBar(...) inside the function passed to Scaffold(...)'s topBar parameter, padding for the status bar area is applied automatically. Compared with View-based development, many Jetpack Compose functions account for edge-to-edge in this way, and the declarative UI paradigm allows edge-to-edge handling to be expressed more intuitively.

That said, at the time of writing, using TopAppBar(...) requires @ExperimentalMaterial3ExpressiveApi. So here let us consider an implementation for the case where experimental APIs cannot be used due to project constraints, and vertically scrolling content must be displayed in the system navigation bar area.

targetSdk = 34 / targetSdk = 35 (no measures)

### 3.1. The WindowInsets Properties

You can obtain the insets of the status bar, system navigation bar, and cutout areas through the properties and functions of (androidx.compose.foundation.layout.) WindowInsets. By converting insets into padding values with WindowInsets.asPaddingValues() and applying them via Modifier and the like, you can keep an appropriate distance between a Composable and the screen edges:

```kotlin
override fun onCreate(savedInstanceState: Bundle?) {
    super.onCreate(savedInstanceState)
    setContent {
        E2ESampleTheme {
            val showsDialog = remember { mutableStateOf(false) }
            val isFabClicked = remember { mutableStateOf(false) }
            val safeDrawingInsets = WindowInsets.safeDrawing.asPaddingValues()
            val direction = LocalLayoutDirection.current
            Scaffold(
                topBar = {
                    Row(
                        modifier = Modifier
                            .background(MaterialTheme.colorScheme.primary)
                            .padding(
                                start = safeDrawingInsets.calculateStartPadding(direction),
                                top = safeDrawingInsets.calculateTopPadding(),
                                end = safeDrawingInsets.calculateEndPadding(direction)
                            )
                            .fillMaxWidth()
                            .heightIn(64.dp),
                        verticalAlignment = Alignment.CenterVertically
                    ) {
                        // This part is unchanged ...
                    }
                },
                floatingActionButton = {
                    FloatingActionButton(
                        modifier = Modifier.padding(end = safeDrawingInsets.calculateEndPadding(direction)),
                        shape = CircleShape,
                        containerColor = MaterialTheme.colorScheme.secondary,
                        contentColor = MaterialTheme.colorScheme.onSecondary,
                        onClick = { isFabClicked.value = true }
                    ) {
                        Icon(Icons.Filled.Add, "One more")
                    }
                },
                content = { innerPadding ->
                    ImageColumn(
                        Modifier.padding(top = innerPadding.calculateTopPadding()),
                        showsDialog, isFabClicked
                    )
                }
            )
        }
    }
    // window.statusBarColor = getColor(0, 0)  // Not needed (no effect) under edge-to-edge
}

@Composable
fun ImageColumn(modifier: Modifier = Modifier, showsDialog: MutableState<Boolean>, isFabClicked: MutableState<Boolean>) {
    var length by remember { mutableIntStateOf(2) }
    val direction = LocalLayoutDirection.current
    val navigationBars = WindowInsets.navigationBars.asPaddingValues()
    val verticalBars = WindowInsets.displayCutout.union(WindowInsets.navigationBars).asPaddingValues()
    Box(modifier = modifier
        .padding(
            start = verticalBars.calculateStartPadding(direction),
            end = verticalBars.calculateEndPadding(direction)
        )
        .fillMaxSize()
    ) {
        val lazyListState = rememberLazyListState()
        val bottomPadding = navigationBars.calculateBottomPadding()
        LazyColumn(
            state = lazyListState,
            modifier = Modifier.fillMaxSize(),
            contentPadding = PaddingValues(bottom = bottomPadding)
        ) {
            items(length, key = { it }) { index ->
                Box(Modifier.fillParentMaxWidth()) {
                    val bias = (index * 0.6F).rem(4F).let { if (it < 2F) it - 1F else (4F - it) - 1F }
                    Image(
                        modifier = Modifier
                            .padding(8.dp)
                            .align(BiasAlignment(horizontalBias = bias, verticalBias = 0F))
                            .animateItemPlacement(), // If compose 1.8.0 or upper, use .animateItem()
                        painter = painterResource(IMAGE_LIST[index % 4]),
                        contentDescription = null
                    )
                }
            }
        }
        // ...
    }
}
```

The code could be a bit more concise if we used Modifier.windowInsetsPadding(WindowInsets) on each Composable's Modifier, but here the values are computed in detail so that the top, the left/right, and the bottom can each be handled differently as the screen rotates.
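For screens without such per-edge differences, the more concise form mentioned above could look like this minimal sketch (my own illustration, not part of the original sample app):

```kotlin
import androidx.compose.foundation.layout.WindowInsets
import androidx.compose.foundation.layout.fillMaxSize
import androidx.compose.foundation.layout.safeDrawing
import androidx.compose.foundation.layout.windowInsetsPadding
import androidx.compose.material3.Text
import androidx.compose.runtime.Composable
import androidx.compose.ui.Modifier

@Composable
fun SafeContent() {
    // A single modifier keeps this subtree clear of the status bar,
    // navigation bar, IME, and display cutout on every edge at once.
    Text(
        "Inset-aware content",
        modifier = Modifier
            .fillMaxSize()
            .windowInsetsPadding(WindowInsets.safeDrawing)
    )
}
```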
The approach closely resembles how WindowInsetsCompat is used, though WindowInsets is somewhat streamlined and more intuitive. WindowInsets.Companion.statusBars represents the status bar insets and WindowInsets.Companion.navigationBars the system navigation bar insets. You can also think in terms of combined insets: WindowInsets.Companion.systemBars equals WindowInsets.statusBars.union(WindowInsets.captionBar).union(WindowInsets.navigationBars), and WindowInsets.Companion.safeDrawing equals WindowInsets.systemBars.union(WindowInsets.ime).union(WindowInsets.displayCutout). Here, WindowInsets.union(WindowInsets) takes, for each of top, left, right, and bottom, the maximum of the two insets. For example, in unrotated portrait the top edge has both the status bar and the cutout, and the operation takes the taller of the two as the top inset (keeping the maximum distance avoids both the status bar and the cutout).

Having obtained the inset values, we give appropriate padding to Scaffold(...)'s topBar and floatingActionButton and to LazyColumn(...). In the View-based approach we set padding and margins with procedural callback code; in the declarative UI paradigm of Compose they can be supplied like parameters, which I find more intuitive. For floatingActionButton, only the insets' end value is applied as padding, as a measure for landscape. For LazyColumn(...), to pad the bottom of the entire scrolling content, just as android:clipToPadding="false" did for the View-based RecyclerView, the bottom padding is passed to the contentPadding parameter rather than to Modifier.padding(Dp).

targetSdk = 35 (with measures, button navigation) / targetSdk = 35 (with measures, gesture navigation)

### 3.2. Setting Padding Inside the Layout

Like the ItemDecoration in 2.2, you can also go edge-to-edge by placing blank space inside the LazyLayout:

```kotlin
fun ImageColumn(modifier: Modifier = Modifier, showsDialog: MutableState<Boolean>, isFabClicked: MutableState<Boolean>) {
    var length by remember { mutableIntStateOf(2) }
    val direction = LocalLayoutDirection.current
    val navigationBars = WindowInsets.navigationBars.asPaddingValues()
    val verticalBars = WindowInsets.displayCutout.union(WindowInsets.navigationBars).asPaddingValues()
    Box(
        // ...
    ) {
        val lazyListState = rememberLazyListState()
        val bottomPadding = navigationBars.calculateBottomPadding()
        LazyColumn(
            state = lazyListState,
            modifier = Modifier.fillMaxSize()
        ) {
            items(length, key = { it }) { index ->
                // ...
            }
            item {
                Spacer(modifier = Modifier.height(bottomPadding)) // <-
            }
        }
        // ...
    }
    // ...
}
```

Just append item { Spacer(...) } at the end of LazyColumn(...). If, hypothetically, there were no title bar and padding for the status bar and cutout were needed, you would add a similar item at the head of LazyColumn(...). One might say this is even more declarative and readable than contentPadding. Setting padding inside the layout like this is a strong option, and this kind of flexibility is one of Jetpack Compose's advantages.

## 4. Edge-to-Edge Support for Screens Mixing Views and Composables

Since Jetpack Compose was released, many projects have been diligently converting View-based apps to Composables yet cannot convert every remaining View right away. Indeed, some UI components, such as WebView, still have no pure composable solution [^1], so I suspect a complete farewell to Views remains unrealistic for many projects. Such apps presumably compose their screens by mixing Views and Composables via ComposeView and the like, and when you apply the techniques above to them, the situation gets more complicated and unexpected problems can occur.

```kotlin
class MainActivity : AppCompatActivity() {
    private lateinit var imageAdapter: ImageAdapter
    val showsDialog = mutableStateOf(false)

    override fun onCreate(savedInstanceState: Bundle?) {
        super.onCreate(savedInstanceState)
        setContentView(R.layout.activity_main)
        findViewById<ComposeView>(R.id.compose_view).apply {
            setContent {
                setViewCompositionStrategy(ViewCompositionStrategy.DisposeOnViewTreeLifecycleDestroyed)
                E2ESampleTheme {
                    ImageColumnScaffold(showsDialog)
                }
            }
        }
        withStyledAttributes(TypedValue().data, intArrayOf(android.R.attr.colorPrimary)) {
            window.statusBarColor = getColor(0, 0)
        }
        setSupportActionBar(findViewById(R.id.toolbar))
    }

    override fun onCreateOptionsMenu(menu: Menu): Boolean {
        menuInflater.inflate(R.menu.menu_main, menu)
        return true
    }

    override fun onOptionsItemSelected(item: MenuItem) = when (item.itemId) {
        R.id.action_settings -> {
            showsDialog.value = true
            true
        }
        else -> super.onOptionsItemSelected(item)
    }
}

@Composable
fun ImageColumnScaffold(showsDialog: MutableState<Boolean>) {
    val isFabClicked = remember { mutableStateOf(false) }
    Scaffold(
        floatingActionButton = {
            FloatingActionButton(
                shape = CircleShape,
                containerColor = MaterialTheme.colorScheme.secondary,
                contentColor = MaterialTheme.colorScheme.onSecondary,
                onClick = { isFabClicked.value = true }
            ) {
                Icon(Icons.Filled.Add, "One more")
            }
        },
        content = { innerPadding ->
            ImageColumn(Modifier.padding(innerPadding), showsDialog, isFabClicked)
        }
    )
}
```

The layout XML file includes a ComposeView:

```xml
<?xml version="1.0" encoding="utf-8"?>
<androidx.constraintlayout.widget.ConstraintLayout
    xmlns:android="http://schemas.android.com/apk/res/android"
    xmlns:app="http://schemas.android.com/apk/res-auto"
    xmlns:tools="http://schemas.android.com/tools"
    android:layout_width="match_parent"
    android:layout_height="match_parent"
    android:clipToPadding="false"
    tools:context=".MainActivity">

    <com.google.android.material.appbar.AppBarLayout
        android:id="@+id/layout_appbar"
        android:layout_width="0dp"
        android:layout_height="wrap_content"
        android:background="?colorPrimary"
        app:layout_constraintEnd_toEndOf="parent"
        app:layout_constraintStart_toStartOf="parent"
        app:layout_constraintTop_toTopOf="parent">

        <com.google.android.material.appbar.MaterialToolbar
            android:id="@+id/toolbar"
            style="@style/Widget.MaterialComponents.Toolbar.Primary"
            android:layout_width="match_parent"
            android:layout_height="?attr/actionBarSize" />
    </com.google.android.material.appbar.AppBarLayout>

    <androidx.compose.ui.platform.ComposeView
        android:id="@+id/compose_view"
        android:layout_width="0dp"
        android:layout_height="0dp"
        app:layout_constraintBottom_toBottomOf="parent"
        app:layout_constraintEnd_toEndOf="parent"
        app:layout_constraintStart_toStartOf="parent"
        app:layout_constraintTop_toBottomOf="@id/layout_appbar" />
</androidx.constraintlayout.widget.ConstraintLayout>
```

ImageColumn.kt is the same as in section 3. This is an example where the scrolling part and the floating action button are implemented in Compose while the title bar and menu handling use AppCompatActivity$setSupportActionBar(Toolbar). This example doesn't happen to, but apps that build many screens with Fragments, for instance, are likely to end up with this kind of structure.

Building this app with targetSdk raised from 34 to 35:

targetSdk = 34 / targetSdk = 35 (no measures)

That the status bar and cutout overlap the title is expected, but in addition a strange gap appears between the title bar and the scrolling area. This happens because the innerPadding passed to Scaffold(...)'s content holds padding values for the status bar and system navigation bar areas. On this screen, reserving the status bar and cutout area is the AppBarLayout's responsibility, that is, the View side's responsibility, and the Composable must not meddle. If you were fixing only this one screen, you could pass 0dp padding to Scaffold(...)'s content; but if the Composable that uses Scaffold(...) is a large function shared across many screens, called on some screens from purely Composable screens and on others from View-based screens containing a ComposeView, and shouldering a wide range of processing, the fix becomes painful.

For such cases, Jetpack Compose provides a convenient function.

### 4.1. Modifier.consumeWindowInsets(WindowInsets)

The following code solves it:

```kotlin
class MainActivity : AppCompatActivity() {
    val showsDialog = mutableStateOf(false)

    override fun onCreate(savedInstanceState: Bundle?) {
        super.onCreate(savedInstanceState)
        setContentView(R.layout.activity_main)
        findViewById<ComposeView>(R.id.compose_view).apply {
            setContent {
                setViewCompositionStrategy(ViewCompositionStrategy.DisposeOnViewTreeLifecycleDestroyed)
                E2ESampleTheme {
                    ImageColumnScaffold(
                        Modifier.consumeWindowInsets(WindowInsets.systemBars), // <-
                        showsDialog
                    )
                }
            }
        }
        // window.statusBarColor = getColor(0, 0)  // Not needed (no effect) under edge-to-edge
        setSupportActionBar(findViewById(R.id.toolbar))
        ViewCompat.setOnApplyWindowInsetsListener(findViewById(R.id.layout_appbar)) { v, windowInsets ->
            val insets = windowInsets.getInsets(
                WindowInsetsCompat.Type.systemBars() or WindowInsetsCompat.Type.displayCutout()
            )
            v.updatePadding(
                left = insets.left,
                top = insets.top,
                right = insets.right,
            )
            WindowInsetsCompat.CONSUMED
        }
    }
    // ...
}

@Composable
fun ImageColumnScaffold(modifier: Modifier = Modifier, showsDialog: MutableState<Boolean>) {
    val isFabClicked = remember { mutableStateOf(false) }
    Scaffold(
        modifier,
        floatingActionButton = {
            FloatingActionButton(
                modifier = Modifier.padding(WindowInsets.safeDrawing.asPaddingValues()),
                shape = CircleShape,
                containerColor = MaterialTheme.colorScheme.secondary,
                contentColor = MaterialTheme.colorScheme.onSecondary,
                onClick = { isFabClicked.value = true }
            ) {
                Icon(Icons.Filled.Add, "One more")
            }
        },
        content = { innerPadding ->
            ImageColumn(Modifier.padding(innerPadding), showsDialog, isFabClicked)
        }
    )
}
```

The status bar problem is solved the View-based way, with ViewCompat.setOnApplyWindowInsetsListener. The gap problem is solved by passing Scaffold(...) a modifier argument extended with Modifier.consumeWindowInsets(WindowInsets): this declares that the Composable consumes the systemBars area, that is, the status bar and system navigation bar areas. With that, the Scaffold(...) content fills what used to be the gap and the display returns to normal; in exchange, UI such as FloatingActionButton(...) now also extends into the systemBars area, so you must add Modifier.padding(WindowInsets.safeDrawing.asPaddingValues()) to FloatingActionButton(...)'s modifier to state explicitly that it should avoid the system navigation bar and cutout areas.

ImageColumn(...) can be the same as in 3.1. As there, it avoids obstacles on the left and right of the screen while setting LazyColumn(...)'s contentPadding at the bottom, allowing the scrolling content to render into the system navigation bar area while keeping the bottom of the scroll from overlapping it.

## 5. Summary

Now that Android 15 support is mandatory, I have presented edge-to-edge approaches for View-based, Composable-based, and mixed View/Composable screens, hoping to help developers who must scramble to support edge-to-edge before their next app release or update. The sample screens here are very simple, yet even they can demand fairly involved reasoning depending on the situation. In some projects, most problems are solved but a few screens still resist. Many tools are provided for edge-to-edge support; if there are techniques here you haven't tried, try them in various combinations. I myself learned a great deal by combining various measures even on simple samples like the above. I hope this article helps you in some way.

## 6. References

- Android API reference
- Behavior changes: Apps targeting Android 15 or higher
- Display content edge-to-edge in views
- Insets handling tips for Android 15's edge-to-edge enforcement
- [Android] Modifier.consumeWindowInsetsとModifier.windowInsetsPaddingの動作まとめ|kaleidot.net
# OpenAI vs Google Image Editing Showdown: Testing the Consistency of gpt-image-1 and Gemini 2.5 Flash Image

In recent years, OpenAI's ChatGPT, Google's Gemini, and Anthropic's Claude have emerged as major players in generative AI. Among these, OpenAI and Google offer AI models with image generation and editing capabilities. This article focuses on OpenAI's gpt-image-1 (as of July 2025) and Google's Gemini 2.5 Flash Image (nicknamed Nano Banana, Web UI as of August 2025), comparing them through actual output examples with an emphasis on image consistency and Japanese text handling.

## 1. What Are gpt-image-1 and Flash Image?

**gpt-image-1 (OpenAI)**: The underlying model for ChatGPT's 4o ImageGeneration. It has powerful generation and editing capabilities and supports inpainting (mask-based editing). Note that full functionality requires API access.

**Gemini 2.5 Flash Image (Google)**: A fast, lightweight image generation model that supports generation using reference images. A notable characteristic of the model (nicknamed Nano Banana) is that it is accessible even with a free user account.

## 2. A Weakness of Image Generation AI: Consistency

When repeatedly generating and editing images with AI tools, you often face the problem of the image's appearance gradually drifting from the original. This so-called lack of consistency changes faces, body shapes, clothing textures, and background structures as the number of generation iterations increases. Flash Image (August 2025) is relatively stable in this regard and became a topic of discussion on social media immediately after its release. gpt-image-1 also improved editing consistency through the input_fidelity parameter introduced in July 2025.

## 3. Output Comparison 1: Editing Human Poses

Task: naturally seat the people from a family photo in the back seat of a vehicle shown in a separate image.

### 3-1. gpt-image-1

Since gpt-image-1 cannot directly reference multiple images (placing people from image B into image A), preprocessing with a rough composite was performed beforehand: the family photo was roughly cut out and overlaid on the seat image. Of course, input_fidelity = high was set.

![雑な合成画像](/assets/blog/authors/aoshima/image_edit/03.jpg =50%x)

Prompt I used:

```
Make this family photo look natural and realistic:
- Fix lighting to match car interior lighting
- Add natural shadows under people
- Adjust color temperature to match
- Make people look naturally seated
- Blend edges smoothly
- Keep faces unchanged but make them fit the scene
- Add subtle reflections on windows if visible
```

Output example:

![gpt-image-1の出力画像01](/assets/blog/authors/aoshima/image_edit/04.png =50%x)

Impressions: While there are differences in clothing details and the vehicle interior, the sitting posture and shadow placement looked sufficiently natural in a single generation. However, the color tone appears to have become yellowish, and facial consistency does not seem to be well maintained.
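For reference, here is a rough sketch of how such an edit call might look through the OpenAI Python SDK. The model name and the input_fidelity parameter are as described in this article; the file names are placeholders, and the exact parameter surface should be checked against the current API reference:

```python
import base64
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Edit the pre-composited image; input_fidelity="high" asks the model
# to stay close to the faces and details of the input image.
result = client.images.edit(
    model="gpt-image-1",
    image=open("rough_composite.jpg", "rb"),  # placeholder file name
    prompt="Make this family photo look natural and realistic: ...",
    input_fidelity="high",
)

# gpt-image-1 returns base64-encoded image data.
with open("output.png", "wb") as f:
    f.write(base64.b64decode(result.data[0].b64_json))
```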
### 3-2. Gemini 2.5 Flash Image

Using the reference image feature, the seat image and family photo were specified, and I entered the following prompt with the same intent as the gpt-image-1 generation.

Prompt I used:

```
In the image of the car's back seat, place the three people from the provided family photo, making it look natural as if they are sitting together.
- Fix lighting to match car interior lighting
- Add natural shadows under people
- Adjust color temperature to match
- Make people look naturally seated
- Blend edges smoothly
- Keep faces unchanged but make them fit the scene
- Add subtle reflections on windows if visible
- Make clothing wrinkles look natural for sitting position
```

Output examples (selection)

Impressions: While facial consistency is high, there appears to be some variation in balance. In outputs without reference images (for comparison), the overall tendency changed little; using reference images was the more stable option.

### 3-3. Summary of Human Editing

- gpt-image-1: Requires preprocessing effort but enables high-precision compositing.
- Flash Image: Easily achieves high quality with the reference image feature. Sitting posture and the people's sizes become sufficiently consistent after a few generation retries.
- Common: Background consistency, for both interiors and lighting, is good in both models.

## 4. Output Comparison 2: Editing with Text (Japanese)

Concern: In addition to consistency, generative AI often struggles with precise reproduction of Japanese text. We therefore compared the models on the task of replacing the cover of a magazine held by a person with a Japanese title and feature text.

![雑誌を抱える画像](/assets/blog/authors/aoshima/image_edit/09.jpg =50%x)

Prompt I used (in Japanese):

```
手に抱えている雑誌を以下の内容に置き換えてください。
- 日本の雑誌で、「旅立ち」というタイトル
- おしゃれでモダンな方向性
- 寺院の特集で表紙はお寺の写真をフィーチャー
- 表紙にはコンテンツ紹介の文言をレイアウト
```

Note: The prompt was written in Japanese from beginning to end because we wanted the magazine title rendered in Japanese.

### 4-1. gpt-image-1 (Using the Inpainting Feature)

Output examples

Impressions: The magazine title "Tabidachi" was accurately generated in Japanese. However, smaller feature text is prone to distortion.

### 4-2. Flash Image (Using Only the Above Prompt)

Since inpainting is not supported, I ran the generation by specifying the entire image via the prompt. The prompt content was the same as above.

Output examples

Impressions: While the overall look is reproduced, the precision of detailed Japanese text falls slightly behind gpt-image-1.

## 5. Conclusions and Use Cases

| Aspect | gpt-image-1 | Flash Image 2.5 |
| --- | --- | --- |
| Title reproduction | Accurately outputs in Japanese | Appearance-based |
| Small text reproduction | Prone to distortion | Difficult |
| Operability | Mask feature allows targeting a location | Specified via prompt |

## 6. Results Summary

| Aspect | gpt-image-1 | Flash Image 2.5 |
| --- | --- | --- |
| Consistency | High (enhanced with input_fidelity) | High (stable with reference images) |
| Editing features | Supports mask editing | Supports reference images |
| Japanese text | Japanese titles are properly reproduced | Appearance-based |
| Convenience | Used via API (for advanced users) | Easily accessible via Web UI |

Key takeaways:

- Choose gpt-image-1 for precision and control (especially tasks where mask editing shines).
- Choose Flash Image for convenience and speed (making use of reference images).
- For Japanese text, both models can produce text that looks Japanese, but their precision on small text and body copy is still a work in progress.

Both are highly polished, and particularly in terms of maintaining the overall appearance without degradation, they are leagues ahead of previous-generation models. Continuing our verification, we plan to share findings on prompt design and generation parameter tuning, as well as tests of new models, in the future.
アバター
はじめに こんにちは! クラウドインフラグループの松尾です。 早いもので今年の8月で入社3年目に突入してしまいました。 今回は、S3イベント通知について、ちょっとした躓きがあったので 知識のアウトプットとして簡単にまとめようと思いました。 同じような問題に直面した方の参考になれば幸いです。 きっかけ 📋LambdaがファイルをS3にアップロードし、アップロードをトリガーとしてSQSにメッセージを送信したい とあるシステムで依頼があり、S3イベント通知を、 ObjectCreated:Put の場合SQSに送信するように設定し実現しました。 しかし、設定後しばらくしてSQSにメッセージが届いていないという指摘があり、その調査を行いました。 S3イベント通知とは? そもそもS3イベント通知とは、ざっくりいうとS3バケット内でイベントが発生した際に イベント駆動で他のAWSサービスに通知を送信できるS3の機能 です。 通知先は( Lambda/SNS/SQS )が選択できます。 📝 補足 今回は直接SQSへ通知するためS3イベントを採用していますが、Amazon EventBridgeを経由した通知も可能です。 複数サービスとの連携や複雑な設定が必要な場合はEventBridge連携も検討する必要があります。 最初の仮説と対応 実は通知先のSQSは一度名前を変更しており、併せてS3イベント通知の設定も更新していました。 設定後すぐに依頼者には確認いただき問題ないと連絡をいただいていたため、以下の仮説を立てました。 💭 SQSやS3イベント通知の設定がAWSの内部的な問題で更新できていなかった可能性があるのではなかろうか? そのためS3イベント通知/SQSを一度削除し、再作成し適当なtxtファイルをS3にPUTしたところ、イベントを正常に受信できました。 この時点では「問題は解決した」と考えました。 問題の未解決と新たな発見 調査及び対応で行ったことを伝え、しばらくすると、 再度依頼者からまだ問題が解決していないという旨の連絡がありました。 その際依頼者から 💬拡張子.txtのファイルで試してみたところ、メッセージ受信しました。.csvでメッセージ受信できるようになっていない可能性がありそう とのメッセージも添えられていました。 おやおや?と思いS3の中身を確認すると、あっ!と思った箇所が! 拡張子 サイズ .txt 数バイト .csv 約20Mバイト 拡張子の問題ではなく ファイルサイズ が原因の可能性が高そうだと判断して再度調査してみました! 原因の特定 ファイルアップロード処理を行っているLambda関数のコードを確認したところ、 S3.upload_file を見つけました。 def upload_file(temp_file_path: str, S3_bucket_name: str, S3_file_name: str): """ ファイルアップロード """ logger.info('---- Upload ----') S3 = boto3.client('s3') res = S3.upload_file(temp_file_path, S3_bucket_name, S3_file_name) logger.info(f"ファイル {S3_file_name} をアップロードしました") boto3の upload_file メソッドは、ファイルサイズが一定の閾値(8Mバイト)を超えると、自動的に マルチパートアップロード を実行します。[^1] 原因はここにありそうです🔍 S3イベントタイプが違った S3の設定イベントタイプとして設定していたのは ObjectCreated:Put この設定では、 ObjectCreated:Put による作成のみが通知対象となります。 しかし、マルチパートアップロードで発生するイベントは ObjectCreated:CompleteMultipartUpload マルチパートアップロードでは、 ObjectCreated:Put とは 全く別のイベント が発生することになります! つまり大きなファイルの場合、このような流れでイベントが起こらなかったのです LambdaがS3にファイルをアップロードする その際、 upload_file メソッドにより、自動的にマルチパートアップロードを実行 そのため発生したイベントは ObjectCreated:CompleteMultipartUpload 結果としてイベント通知設定が ObjectCreated:Put のみだったため、SQSに通知されない ちなみにマルチパートアップロードとは? マルチパートアップロードは、大きなファイルを複数の部分(パート)に分割してアップロードする仕組みです。以下の利点があります。 ^2 高速: 複数パートを並列でアップロード 信頼性: 失敗したパートのみ再送信 中断の再開: ネットワーク障害時でも途中から再開が可能 解決方法 S3イベント通知の設定変更 イベントタイプの設定を以下のように変更しました すべてのオブジェクト作成イベント (ObjectCreated:*) この設定により、以下のすべてのイベントが通知対象となります ObjectCreated:Put ObjectCreated:Post ObjectCreated:Copy ObjectCreated:CompleteMultipartUpload 設定変更後、大きなCSVファイルでもSQSにS3イベント通知が正常に送信されることを確認できました! 今回のケースでは、ファイルのアップロード元がLambda関数のみであることが明確だったため、 ObjectCreated:* (すべてのオブジェクト作成イベント)に設定しました 他の解決方法もある S3イベント通知の設定変更以外にも、boto3のコードでput_objectメソッドを使用することでも対応することは可能です。 ただし、今回はS3イベント通知の設定変更を選択しました。理由は アプリケーションコードを変更する必要がない 将来的な変更に対しても安定している 他のアップロード方法でも対応可能 S3イベント設定変更が最も汎用的で確実 だと思いますが、状況によってはコード側での対応も有効だと思います。 学んだこと 1. ファイルサイズを意識したテスト 小さなテストファイルでの検証では、マルチパートアップロードが発生せず、問題を見落とす可能性があります。S3を挟む処理の場合は 本番環境で想定されるファイルサイズでのテスト を実施することが重要です。 2. boto3の内部動作への理解 boto3の upload_file メソッドは便利ですが、内部でマルチパートアップロードを自動実行する場合があります。 この動作を理解して、適切なイベント設定を行う必要があります。 3. イベント通知設定の考慮点 今回は要件上、アップロード元がLambda関数のみと限定されていたため、 s3:ObjectCreated:* を選択しましたが、一般的には必要最小限のイベントタイプに絞ることが推奨されます。 重要なのは、boto3などのSDKが内部でどのようなアップロード方法を使用するかを把握し、それに応じた適切なイベント設定を行うことです。 まとめ 今回の問題は、以下の要因が重なって発生しました。 1. boto3が自動的にマルチパートアップロードを実行 2. S3イベント通知が`ObjectCreated:Put`イベントのみを対象に設定されていた 3. 小さなテストファイルでは問題が再現されない 小さなファイルでテストして安心していたら、実際の運用で問題が発覚する というのは、よくある落とし穴だと思います。 同様の問題で困っている方は、以下を確認してみてください。 S3にアップロードしたファイルのサイズ S3イベント通知の対象イベントタイプ このブログが少しでも参考になれば幸いです。 [^1]: boto3 TransferConfig公式ドキュメント
アバター
Hello! I'm Kasai from the SRE Team. KINTO Technologies Corporation participated in "SRE NEXT 2025" as a Platinum Sponsor! Thank you to everyone who visited our booth! It was inspiring to talk with so many participants, and I gained a lot of valuable insights. You can read about the roundtable discussion held by our members who attended SRE NEXT in the article linked below. Please take a look! https://blog.kinto-technologies.com/posts/2025-07-18-sre_next_look_back/ At our booth, we conducted a survey with the theme: "What's Your NEXT?" Thank you to everyone who took part. In this article, I’d like to share the survey results with you. Sticky notes on the board (left: Day 1, right: Day 2) ![] (/assets/blog/authors/kasai/20250731/20250724_162209.jpeg =400x) Stack of sticky notes for the two days Survey Results Over the two days, we received 312 responses to the survey. I categorized the responses into several themes, which I would like to share here. *The classification was done using Gemini. SRE & Organizational Culture (60 responses) These responses were related to the practice and promotion of SRE, fostering organizational culture, recruitment, and team building. Becoming able to promote SRE Spreading the SRE culture Hiring engineers successfully Creating an engineering organization that excites people! Building a common platform for Embedded SRE Many responses touched on topics such as how to behave as an SRE, how to spread SRE-oriented thinking, and the difficulty of hiring more engineers. I can strongly relate to these points, as I also think about how to effectively communicate the benefits of SRE thinking to other teams and help them recognize its value. AI Utilization (58 responses) These responses were related to improving operational efficiency and creating new value through the use of AI and LLMs. Applying AI to infrastructure and SRE Achieving Agentic DevOps Enabling incident response entirely with AI Reducing toil through AI utilization Achieving a seven-day weekend with AI!! Some participants were already using AI and wanted to expand its use, while others were planning to start using it. I also use generative AI, but only as an aid when writing code. For example, I have not yet reached the point where I can have AI handle all incident responses. I would like to challenge myself to find ways to use it beyond coding, such as in areas like incident response. Technology & Service Improvement (58 responses) These responses were related to improving service quality, including the introduction of SLI/SLO, enhancing performance, and resolving technical debt. Introducing SLI/SLO Significantly improving performance to enhance user experience Resolving technical debt Fully automating operations Working with eBPF Many responses focused on introducing SLI/SLO and expanding observability. I still have much to learn about SLI/SLO, and my experience with implementation is limited, so I would like to continue practicing and building my knowledge and expertise. Business & Career (41 responses) These responses were related to contributing to business, product growth, IPO, career changes, and promotions. Understanding the business side and applying it to work Business Growth IPO Become a CTO Changing jobs In addition to technical topics, there were also responses related to business. I feel that it is necessary to discuss reliability not only with the development side but also with the business side. 
There were others who shared the same view, as well as those who were thinking about how to grow the business. At present, I still feel somewhat distant from the business side, so I would like to close that gap and contribute to the business from an SRE perspective. Speaking & Output (30 responses) These responses were related to sharing information externally, such as speaking at conferences or writing blog posts. Preparing for a keynote talk Outputting every month Hosting PHP Conference 2026 Hokkaido Submitting many CFPs Becoming a speaker Perhaps because it was a conference setting, there were many responses related to speaking engagements. Some people even wrote that they would speak at SRE NEXT, which made me feel that it’s wonderful we are able to host events that make people want to present. I would also like to spread the word about our company through blog posts and speaking engagements so that more people can learn about who we are. Other (65 responses) These were private goals or unique responses that did not fit into the categories above. Go to a sauna See penguins in Antarctica Be happy Have dinner Go on a trip Some participants also wrote things unrelated to work or SRE. It reminded me that maintaining health and taking time to refresh are also important for doing good work. Since we had the opportunity, we also asked Gemini to put together a summary. Thank you so much for the 312 responses we received. Looking over the results, it is clear that today’s engineers have a healthy, well-rounded set of interests, covering technology, organization, business, and even personal goals. The most common responses were in the "SRE & Organizational Culture" category, which shows that people see SRE not just as a technical role but as part of the culture of the team and the entire organization, and that they are strongly motivated to foster and develop this culture. This was closely followed by "AI Utilization" and "Technology & Service Improvement," indicating a balance between the desire to explore cutting-edge technology and a strong awareness of SRE’s core responsibility of improving service reliability. The fact that these three categories ranked at the top in almost equal numbers symbolizes a very balanced view of engineering. There were also many responses related to "Business & Career" and "Speaking & Output," which was notable because it showed that many people view their role from a broader perspective, going beyond their daily work to contribute to the business and give back to the community. Finally, the "Other" category clearly reflected values that prioritize well-being, including health, personal life, and individual dreams, alongside work. Overall, the survey results were highly insightful, painting a picture of the modern, mature engineer—someone who pursues technical excellence, contributes to the organization, people, and business, and also seeks fulfillment in their personal life. The above are the survey results and summary. The theme of SRE NEXT 2025 is "Talk NEXT." This is why we decided to conduct this survey with a topic related to the theme, with the hope of engaging in a dialogue with people who visited our booth. By choosing a highly abstract topic, we were able to have conversations not only with SREs, but also with engineers in other roles, and even with non-engineers. I feel it turned out to be a very good topic. 
Many of the responses touched on generative AI, which has been a recent trend, and I got the impression that many people are thinking about how to apply AI in the SRE field. As for myself, while I can collect system metrics, I am still not making full use of that data to support decision-making, influence the business, or contribute to operations. I believe that being able to do so would make my role as an SRE even more interesting, and I would like to take on that challenge. I could relate to many of the other responses as well, and I enjoyed having conversations at the booth on the day. Thank you very much! In Closing Once again, thank you very much to everyone who visited our booth! We were delighted to welcome such a large number of visitors. Many thanks also to the SRE NEXT organizers for putting on a wonderful event. I believe the stamp rally and other activities played a big part in bringing so many people to our booth. I hope to be involved in some way in next year's SRE NEXT as well. See you again at SRE NEXT next year! We Are Hiring! KINTO Technologies is looking for new teammates to join us! We are happy to start with a casual interview. If you’re even a little curious, please apply using the link below! https://hrmos.co/pages/kinto-technologies
アバター
KINTOテクノロジーズ(以下KTC)で my route(iOS)を開発しているRyomm( @__ryomm )です。 2025年9月19-21日の3日間にわたって開催されたiOSDC Japan 2025にゴールドスポンサーとして協賛しました✨ 昨年に引き続き、2回目の協賛となります。 1年の間に様々なカンファレンスでスポンサー出展してノウハウを身につけた技術広報の力を借りつつ、iOSアプリ開発に携わるエンジニアが中心となって準備を進めてきました。社内のクリエイティブ室と協力して作成した、こだわりのノベルティやブース企画を紹介します。 「アプリのひみつ アプリ内製開発ストーリー」冊子 こちらはノベルティBOXに封入した特製冊子です。 昨年の反省を踏まえ、今年は紙媒体にしようと決めていました。 しかし、ただのチラシというのも味気なくつまらないと思ったので、絵本のような質感で手に取りたくなる冊子を目指して制作しました。 予算との戦いがありつつも、検討を重ねた末にやわらかい質感の「ヴァンヌーボVG スノーホワイト」という用紙を採用しました。KTCで開発しているアプリのストーリーを楽しく紹介しています。 ぜひご一読いただき、KTCのアプリ開発に触れてみてください。 ブース企画 今回のブースのテーマは「KTCを知ってもらう」でした。 今年度のKTCでは AIファースト が注力テーマのひとつに据えられています。 そこから、案出しのマンダラートにて「VTuberのようなキャラクターが喋っていたらインパクトがあるのではないか?」というアイディアが生まれ、紆余曲折を経てKTCのAI広報「るぴあ」がブースでじゃんけんをすることになりました。 今回るぴあをブースに立たせるにあたって、クリエイティブ室の桃井さん( @momoitter )に全面協力いただきました。 るぴあ誕生秘話についてはこちらの記事をご覧ください。 https://blog.kinto-technologies.com/posts/2025-03-07-creating_a_mascot_with_generative_AI/ るぴあをブースに立たせるにあたって等身大になる大型のサイネージを購入したのですが、会場でストリーミング再生をするのは品質に不安があったためオフラインで動画をシームレスに再生できるようにMacアプリを作成しました。 こちらのMacアプリはヒロヤさん( @TRAsh___ )との共作です。 プロトタイプ版はGemini CLIを利用して1時間半ほどで完成しました。 プロトタイプ版では VideoPlayer に AVPlayer を渡し、AVPlayerItemを差し替えて動画を再生しています。 待機・呼び込み動画はそれぞれ何種類かあり、全てデフォルトポジションで始まり、デフォルトポジションに戻るようにすることで動画をなめらかに繋げられるようにしました。 ただこちらの実装では動画間のチラつきが激しく、シームレスな再生とは言えません。 そこで AVPlayer から AVQueuePlayer に変更し、次の動画まであらかじめキューに詰めておくことで事前にロードし、シームレスに動画が再生されるように改修しました。 じゃんけんモードへの移行/勝ち負け動画への移行等に関してはキューを割り込ませる必要があり、若干のチラつきは発生しますが、その他の部分ではきれいにつながるようになったと思います。 さて、そんなじゃんけんを勝ち抜いた猛者の方々にはトミカ、残念ながら負けてしまった方々にはありがとうめぐリズムをプレゼントしていました。 遊びにきていただいたみなさま、ありがとうございました!またどこかでパワーアップしたるぴあと遊んでくださいね。 https://x.com/KintoTech_Dev/status/1969236820270420032 さらに、じゃんけんだけではコミュニケーションに不安があったため、話のタネとして「iOSアプリ開発で楽しいことは?」というテーマでアンケートも実施していました。 こちらにもたくさんご回答いただき、ありがとうございました! 最後に、参加したKTCのメンバーで記念写真を撮りました📸 登壇情報 弊社から2名登壇しておりました🎉 iPhoneを用いたフライトシム用ヘッドトラッカーの自作事例 by Felix Chon https://fortee.jp/iosdc-japan-2025/proposal/dfe5819b-6eb7-4880-a89e-411a839b794c QRコードの仕様ってn種類あんねん by Ryomm https://fortee.jp/iosdc-japan-2025/proposal/a1ddb24c-ecc2-4db7-8bce-5a002a1489e1 ぜひニコ生タイムシフトやYoutubeからご覧ください。
アバター
Hello, this is Hoka winter. For about a year, KINTO Technologies (KTC) has been running the 10X Innovation Culture Program announced by Google Cloud Japan G.K. in September 2023 to foster an innovation-driven organizational environment. This time, aside from the usual 10X, I will talk about the 10x Innovation Culture Pitch practice session we attended. What is the 10x Innovation Culture Pitch Practice Session? The purpose of this training is to develop the facilitation skills necessary to implement the "10X Innovation Culture Program" within your company. To do this, you need a deep understanding of the 10X Innovation Culture Program. This training is designed to deepen that understanding. This was our second time taking part in the training. Last time, the training was mainly attended by managers. Since then, the progress of 10X has been dramatic, so this time, volunteer members—mainly team leaders—took part. The 10x Innovation Culture Pitch practice session is broadly divided into two parts. One part involves “learning the six elements for creating innovation,” and the other focuses on “expressing the six elements in our own words.” ![](/assets/blog/authors/hoka/20250714/image6.png =600x) Preparation for the Training What I've learned about 10X from Google people is that the difficulty level for KTC gradually increases. In the first training session, all KTC employees “merely participated,” but in the second session, KTC employees took on the role of presenters for the culture session. In other words, they play an important role as presenters who convey the six elements for creating innovation to other participants. ![](/assets/blog/authors/hoka/20250714/image3.png =600x) Thankfully, the presentation slides were prepared by the people at Google, so all we at KTC had to do was read out the six elements. Even though it was just that simple, it was incredibly difficult!!! The six elements contain many of Google's ideas and examples of how to be an innovative organization. However, simply reading them will not reach the hearts of participants. We practiced many times until we could speak in our own words, incorporating episodes from KTC and our own experiences. In particular, we remembered the Google presenters from the first training session and focused on speaking confidently and at an easy-to-follow pace. ** On the Day of the Training** The day has finally arrived. 27 people gathered at the Google office in Shibuya. Participants once again joined from Osaka, Nagoya, and Tokyo. The day kicked off with an opening talk by Google’s Kota-san. Many thanks to Kota-san, as always. Next, Kissy, the manager who is the main leader of 10X, shared an encouraging message online from the Nagoya office. Amid an atmosphere of “Huh? What’s starting now? What is this training?” we, the presenters, took turns announcing themes one by one. Can we get the participants to understand 10X? Awacchi presented with an original story, I was overly nervous, Yukiki appeared online, Nabeyan was calm like a teacher, Mizuki gave the best performance on the actual day as usual, and Otake had the composure to make others laugh. Everyone performed their best on this very day (if we do say so ourselves). In the post-event survey, as many as 10 participants chose "The culture session was great.” I was also happy to hear comments like "It was just as great as the last Google presenters" and "The talk flowed so naturally—I could follow everything just by looking at the slides and listening to the presentations." 
Next was the output session. Each team, consisting of six people plus one Googler, moved to their assigned room, and just like the earlier presenters, each person took turns giving a presentation. It was an intensive output time of 20 minutes × 6 people, totaling 120 minutes. The participants used the same slides that the presenters had used earlier, and each gave a 10-minute presentation. There was a 5-minute preparation time before each presentation. While listening to the presentations, other members filled out feedback sheets with points they liked and points requiring improvement, and then provided feedback after each presentation. ![](/assets/blog/authors/hoka/20250714/image1.png =600x) I was part of Team D, and they were so good that I couldn’t help but wonder, "Did they practice at home?" During the feedback time, we naturally discussed the good points of the presentations, and the discussion became lively. For example, the following comments were made: Speak while summarizing, without being bound by slides or a script. Speak in your own words. Stories of failure tend to resonate with the audience. Be empathetic to the audience and avoid imposing too much of a lecturing tone. Catchy phrases like "Motivation Switch” are effective and make things easy to understand. ![](/assets/blog/authors/hoka/20250714/image7.png =600x) In the post-presentation survey, satisfaction with the program was very high, with an average score of 4.7. The following points were selected as “positive aspects of the program content.”: (n=22, multiple answers allowed) It was good to be able to listen to other participants' presentations: 20 people I was glad to have the opportunity to practice myself: 17 people It was great to receive feedback from others: 21 people Closing After the presentations, we held a wrap-up session in the original seminar room. While I wondered how the other groups were doing, Google people summarized the earlier feedback sheets for us using generative AI, "Gemini." ![](/assets/blog/authors/hoka/20250714/image4.png =600x) I was planning to check the feedback sheets from the other groups later, but they were instantly converted to text via Gemini and shared with everyone on the spot. It truly was a “Feedback is a gift!” moment. Not only did we learn the training content itself, but we also gained a lot of tips on how to be more efficient—such as how to install tools quickly, how to make use of feedback sheets, and how to share information from other groups. Thank you so much to everyone at Google. Looking Ahead Through this training, we found that the high-level 10x Innovation Culture Pitch practice sessions are effective even for non-managerial members. So, we plan to implement them at KTC in FY2025. KTC’s challenge to foster innovation is far from over. ![](/assets/blog/authors/hoka/20250714/image8.png =600x)
アバター
Introduction Hello! This is Otaka from the Cloud Security Group in KINTO Technologies. On a daily basis, I work on setting up guardrails for our cloud environments and keeping them safe through monitoring and improvements with CSPM and threat detection. To stay up to date on the latest security trends, I joined the Hardening Designers Conference 2025 . Here's my report from the event. What is the Hardening Project? The Hardening Project is a competitive event aimed at improving practical cybersecurity skills. Participants run vulnerable systems while defending against, recovering from, and improving them in response to external attacks, building real-world incident response skills in the process. What sets it apart is that it evaluates not only technical skills but also overall incident response capabilities, including teamwork, documentation readiness, and the establishment of an operational system. The Hardening Designers Conference 2025 that I participated in this time is a hands-on and conference event themed "invisible divide." It served as preparation for the competitive event in October. Day1 Hands-on Program In the hands-on session, we experienced an attack technique called "Living off the Land." This technique involves attackers abusing legitimate, built-in tools and functions already in a system to gain access and carry out harmful activities. For example, in a Windows environment, they use PowerShell, WMI, etc. to conduct the attack. The key is that they use built-in tools and functions, not files brought in from outside. This makes it hard to tell their activities apart from normal operations, and difficult for security tools to detect. Some of the commands used in the attack were ones I'd relied on back in my days as a system administrator. If a tool or command isn't used in normal operations, disabling it might help. But for those frequently used and hard to turn off, the only real option may be to log and monitor them closely. The workshop was a real eye-opener, showing just how sophisticated server attacks have become, and how blurred the line is between malicious activities and legitimate operations. Day2 & 3 Conference Program On Day 2 and 3, a variety of lightning talks and sessions delved into the theme of "divide" in the context of cybersecurity, with speakers sharing the latest technologies, introducing new initiatives, and hosting lively discussions. In the security field, there are often "divides" between different stakeholders, and these can become obstacles that hinder Hardening (security fortification) efforts. For example, divides like the following often arise in actual operations: Divide between Development, Operations, and Security Sometimes, the focus on implementing features and improving operational efficiency can push security to the back seat. For instance, poor password management or weak account controls can create security vulnerabilities. To avoid this, think of security not as a "restriction" but as part of overall "quality," and make sure security requirements are built in from the very start through security-by-design and shift-left practices. Divide between System Users and Developers/Operators While system users desire ease of use, they may not fully grasp the importance of security. Engineers, on the other hand, often find themselves caught in the middle, trying to balance user requests for features with the need to keep the system secure. 
To bridge this gap, it is necessary to educate users and maintain careful communication with them during system development and operation, fostering their understanding of security. Divide between Rule Makers and Implementers Security personnel who set rules often draw on a range of guidelines from public bodies and specialist organizations to define ideal baselines and rules. However, for those on the ground such as system development and operations teams, system constraints and operational workload can make it hard to implement them as intended. To put this into practice, it is important to take into account constraints and operational loads and take a flexible approach so that security can be implemented properly. Divide between Attackers and Defenders While attackers use technological innovation and teamwork to launch increasingly sophisticated attacks, defenders can end up reacting too slowly because of costs or a lack of understanding among stakeholders. Companies hit by cyberattacks are often reluctant to disclose details, which means valuable knowledge that could prevent similar incidents is not shared in many cases. The defense side would also like to strengthen information sharing and cooperation, but things are not going as smoothly as expected. Divide between AI and Humans Efforts to utilize generative AI are spreading in the IT field, from writing program code to upgrading SOC operations. But in reality, AI often can't take security into account without clear and specific instructions, . Generative AI has come a long way, yet there still seems to be a gap between what humans and AI can do. To utilize AI properly, we still need human know-how, like designing effective prompts and setting up guardrails. When we think about it again, we can see just how many kinds of divides exist. I had never looked at security from this angle before, so this was genuinely insightful. Overcoming the Divides—KINTO Technologies' Approach At the Cloud Security Group, our basic policy is "security for business," and we believe that security should accelerate business operations, not slow them down. We work on security from the following two key angles: Preventative guardrails: We provide security preset account environments with the minimum required security settings already implemented before handing them to development teams. This helps support secure design from the very beginning. Detective guardrails: We use SOC monitoring with tools such as Sysdig, AWS Security Hub, and Amazon GuardDuty to detect and respond to threats in real time. Through regular Posture management, we also conduct kaizen activities to improve problematic configurations. Through these security measures and operations, we are working to create an environment where developers can focus on their work with peace of mind, while adhering to our company's security guidelines and ensuring the necessary security. In short, this is an effort to bridge the divide between rule-makers and implementers, and between development, operations, and security teams. We have also begun taking gradual steps toward AI security (see details here ). However, with technology and trends evolving so rapidly, it feels as though we are currently a little on the back foot. Within the company, the use of generative AI in business operations and its implementation into products are advancing actively, and the challenge ahead will be determining how to implement effective controls while overcoming the divide with AI. 
Furthermore, we are working to review our mindset toward system development projects at KINTO Technologies, drawing on the IPA's key points for requirement clarification with a house-building analogy as a reference. This is not limited to system construction; from a security perspective as well, it is an effort to be mindful of the divide between system users and engineers, and to foster better relationships and results. For more information about IPA house-building, please see here (in Japanese) . Summary Through the Hardening Designers Conference 2025, I had the valuable opportunity to learn about security trends from the perspective of divide, something I had not consciously considered before. By looking at my own organization's security through the same lens of divide, I was also able to reaffirm our current initiatives. Going forward, I hope to continue and refine our efforts to overcome divides and achieve better security. Lastly Our Cloud Security Group is looking for people to work with us. We welcome not only those with hands-on experience in cloud security but also those who may not have experience but have a keen interest in the field. Please feel free to contact us. For additional details, please check here (in Japanese).
アバター
Introduction Hello! I’m Uehira from the DataOps Group in the Data Strategy Division at KINTO Technologies. I’m mainly responsible for the development, maintenance, and operation of our internal data analytics platform and an in-house AI-powered application called "cirro." "cirro" uses Amazon Bedrock for its AI capabilities, and we interact with it via the AWS Converse API. In this article, I’ll share how I tested Strands Agents in a local environment to explore tool integration and multi-agent functionality for potential use in "cirro." Intended Audience This article is intended for readers who have experience using Amazon Bedrock via the Converse API or Invoke Model API. What Is Strands Agents? Strands Agents is an open-source SDK for building AI agents, released on May 16, 2025, on the AWS Open Source Blog. The diagram below is from the official Amazon Web Services blog: As shown in the diagram, implementing an AI that can use tools requires a processing structure known as an Agentic Loop. This loop allows the AI to determine whether its response should go directly to the user or whether it should take further action using a tool. With Strands Agents, you can build AI agents that include this loop without needing to implement it manually. Source: Introducing Strands Agents – An Open-Source AI Agent SDK Running Strands Agents in a Local Environment *This section assumes that you have prior experience using Bedrock via the Converse API or similar tools. Therefore, basic setup steps such as configuring model permissions are omitted. Exception handling is also skipped, as this is a sample implementation. Setup Libraries Install the required libraries with the following command: pip install strands-agents strands-agents-tools Execution ① If you're lucky, the following minimal code might work: from strands import Agent agent = Agent() agent("こんにちは!") This code appears in many blog posts, but it didn't work in my environment. 😂 Which makes sense—after all, it doesn’t specify the model or the Bedrock region... Execution ② To call the model properly, you need to explicitly specify the model and region like this: In this example, we assume an environment similar to ours, where you log in via SSO and obtain permissions through a switch role. 【Point】 Make sure the model and region you specify are accessible with the assumed role. Example: Model: anthropic.claude-3-sonnet-20240229-v1:0 Region: us-east-1 *The region is specified in the profile when creating the session. import boto3 from strands import Agent from strands.models import BedrockModel if __name__ == "__main__": # セッション作成 session = boto3.Session(profile_name='<スイッチ先のロール>') # モデル設定 bedrock_model = BedrockModel( boto_session=session, model_id="us.amazon.nova-pro-v1:0", temperature=0.0, max_tokens=1024, top_p=0.1, top_k=1, # Trueにするとストリーミングで出力される。 # ストリーミングでツール利用がサポートされないモデルがあるため、OFF streaming=False ) # エージェントのインスタンスを作成 agent = Agent(model=bedrock_model) # 質問を投げる query = "こんにちは!" response = agent(query) print(response) Now you can call Bedrock with parameters like temperature, just like you would with the Converse API. 🙌 But if you're using Strands Agents, of course you'll want to call a tool ! Execution ③ If you define a tool as shown below, the agent will use the appropriate tool based on the question and return a response after executing the Agentic Loop. 【Point】 The function you want to use as a tool is decorated with "@tool". 
Tools are passed as a list of functions, like this: Agent(model=bedrock_model, tools=[get_time]) import boto3 from strands import Agent from strands.models import BedrockModel #------ツール用に読み込んだライブラリ------------ from strands import tool from datetime import datetime # ツールの定義 @tool(name="get_time", description="時刻を回答します。") def get_time() -> str: """ 現在の時刻を返すツール。 """ current_time = datetime.now().strftime("%Y-%m-%d %H:%M:%S") return f"現在の時刻は {current_time} です。" if __name__ == "__main__": # セッション作成 session = boto3.Session(profile_name='<スイッチ先のロール>') # モデル設定 bedrock_model = BedrockModel( boto_session=session, model_id="us.amazon.nova-pro-v1:0", temperature=0.0, max_tokens=1024, top_p=0.1, top_k=1, # Trueにするとストリーミングで出力される。 # ストリーミングでツール利用がサポートされないモデルがあるため、OFF streaming=False ) # ツールを使用するエージェントのインスタンスを作成 agent = Agent(model=bedrock_model, tools=[get_time]) # 質問を投げる。 ツールを使用しないとAIは時刻が判別できない。 query = "こんにちは! 今何時?" response = agent(query) print(response) In my environment, I got the following response: <thinking> 現在の時刻を調べる必要があります。 そのためには、`get_time`ツールを使用します。 </thinking> Tool #1: get_time Hello! 現在の時刻は 2025-07-09 20:11:51 です。 Hello! 現在の時刻は 2025-07-09 20:11:51 です。 Advanced Use Regarding the tool, in the previous example, it simply returned logic-based output. However, if you create an agent within the tool and incorporate additional logic, such as having the agent verify a response, you can easily build a multi-agent system where one AI calls another. Here’s a modified version of the tool that returns not only the current time, but also a trivia fact provided by a child agent: 【Point】 We’re reusing the session object declared in the global scope under if __name__ == "__main__": . If you don't do this, model setup takes about a minute in my environment. This is probably due to the time required to allocate resources. @tool(name="get_time", description="現在日時と、日時にちなんだトリビアを回答します。") def get_time() -> str: """ 現在の時刻を返すツール。 注意:この関数では boto3.Session を使った BedrockModel の初期化に グローバルスコープで定義された `session` 変数が必要です。 `session` は `if __name__ == "__main__":` ブロックなどで事前に定義しておく必要があります。 """ current_time = datetime.now().strftime("%Y-%m-%d %H:%M:%S") # モデル設定 bedrock_model = BedrockModel( boto_session=session, model_id="us.anthropic.claude-sonnet-4-20250514-v1:0", temperature=0.0, max_tokens=1024, top_p=0.1, top_k=1, streaming=False ) agent = Agent(model=bedrock_model) # ここが子エージェントから回答を得る部分! response = agent(f"現在の時刻は {current_time} です。 日時と日付にちなんだトリビアを1つ教えてください。") return f"現在の時刻は {current_time} です。{response}" Here’s the final response I got from the AI: Hello! The current time is 2025-07-10 18:51:23. Today is "Natto Day"! This commemorative day was established based on the wordplay "7 (na) 10 (to)." It was started in 1992 by the Kansai Natto Industry Cooperative Association to promote natto consumption in the Kansai region. Interestingly, while natto has long been popular in the Kanto region, many people in Kansai don't like it. This day was created in hopes of encouraging more people in Kansai to enjoy natto. Today, "Natto Day" is recognized nationwide, and many supermarkets offer special discounts on natto to celebrate. Since it's around dinner time, how about giving natto a try today? Notes While multi-agent systems are relatively easy to implement, in practice, calling multiple AIs increases both token usage and response time, which makes it tricky to decide when to use them. 
Below is a breakdown of the processing costs when using both a parent and a child agent: Category Parent Agent Child Agent Total Input Tokens 1086 54 1140 Output Tokens 256 219 475 Processing Time 7.2 sec 7.3 sec 14.5 sec As you can see, the overall processing time doubles when the child agent's response is included . For that reason, it may be more practical to limit multi-agent use to cases where output diversity is required or the task is too complex to handle with rule-based logic . Conclusion This time, in order to expand the AI utilization system "cirro" developed by the Data Strategy Division, I introduced the key points for running Strands Agents based on my testing. There were more unexpected pitfalls than I had anticipated, and I hope this article will be helpful when you try it out yourself. Using Strands Agents makes it easy to extend functionality with tools and child agents. At the same time, some challenges became apparent, such as increased processing time and token usage, as well as permission management when integrating with systems. The "cirro" system mentioned in this article is a completely serverless system developed in Python, and one of its key features is that users can flexibly expand tasks and reference data on their own. Currently, we are using it for dashboard guidance, survey analysis, and other internal applications. There is an introductory article on AWS about it, and I hope to share more details in the future! AWS Cirro Introduction Article
アバター
Hello everyone, I’m Mori from the Global Development Division and the Tech Blog team. I usually work as a web product manager (PdM) in the Global Development Division. I, along with several colleagues from my company, attended iOSDC Japan 2023 held from September 1 to 3, 2023! Since an iOS engineer who attended the iOSDC Japan 2023 will cover the content of the interesting sessions, I’ll share my impressions from an event management perspective😎 (#iwillblog is putting pressure on me lol) https://iosdc.jp/2023/ Reason for Participation Why did I participate in iOSDC even though I'm not an iOS engineer? In fact, recently the Tech Blog team has been supporting the operation of study sessions for external audiences and planning internal events. Recently, we have supported the operation of events such as the KINTO Technologies Meetup! hosted by the Corporate IT team and the DBRE Summit 2023 organized by the DBRE team. We are still a relatively new team, so even during planning and events, we are full of concerns like, “How can we make the participants enjoy more?” “How can we facilitate the event better?” 🤔 Hearing the rumor that iOSDC is incredibly exciting, I infiltrated the conference for three days to learn about its planning ideas and how to create such a lively atmosphere!! Our iOS engineers reported on the 2022 iOSDC, so please be sure to check it out as well 👍 https://blog.kinto-technologies.com/posts/2022-10-13-iosdc2022_participation_report/ Reception Upon entering the venue, we first checked in and received name cards. These cards contain QR codes and NFC chips for entrance and exit management. QR codes are used for managing entrance and exit at the venue, while NFC chips allow attendees to collect name badges via an app. This idea is great. I better take notes.📝 https://blog.iosdc.jp/2023/09/01/name-card-nfc-tag-exchange-2023/ In fact, on the second day, I completely forgot my name card, but even then, they smoothly gave me a new one, kindly saying, “You forgot your name badge.” I was impressed by their fine consideration. I better take notes.📝 Sessions The sessions were held across four rooms: two large and two small ones. Attendees were able to participate in the sessions they wanted. 🔻Venue layout: I listened to the talks on the second floor. The first floor was for booths for communication with sponsors. Drinks were available. I had confirmed the timetable for the day before the event, and I was impressed by how much effort went into deciding which talks to give at which times and in which rooms. With four rooms running sessions simultaneously, the time for each session would vary depending on its content, and the days the speakers could attend would also differ. This coordination skill is something I want to acquire. The title and speaker name of each session were read aloud via recording by Fumihiko Tachiki , a famous voice actor known for narrations in TV programs such as The Quest a.k.a. “Sekai no Hate Made Itte Q!” Both the audience and the speakers got excited🥳 Online Distribution By purchasing an on-site attendance ticket, attendees could also watch the sessions online. (There were also tickets available for online viewing only.) The online sessions were streamed via Niconico Live. I watched online the sessions I couldn't attend in person, and what surprised me was that they were streamed almost in real time! It may depend on the device or environment, but the time lags were probably less than five seconds. 
I was able to chat with on-site participants in real time via Slack. What was impressive was that, during the venue changeover, the streaming screen not only showed commercials from sponsors but also footage of the staff working during the preparation period. We also started hybrid streaming from the August event, so I was watching it while thinking that it might be interesting to stream something like this during the break in our next event. Lightning Talk (LT) Session Even before attending, I was curious about the schedule, which had six or seven 5-minute lightning talks. Since I also usually facilitate internal events and meetings, I was thinking, “Is it really possible to pull off this kind of agenda?” — but they did it!😂 First of all, I thought the most difficult thing would be to finish the presentation on time. Their ingenuity on this point is amazing…! When the 5-minute limit approached, music played to create a sense of urgency for each speaker, and the audience was instructed to wave penlights . It was a great idea to encourage the speakers while ensuring they stick to their allotted time. Plus, it’s fun for those waving penlights…! (It was a production style different from last year.) *They cheered for the speakers with penlights in their signature colors. * To ensure each 5-minute speech proceeded smoothly, no time was allocated for Q&A; instead, a system was set up for attendees to approach the speakers later at other booths. The only preparation needed between speakers was setting up their presentation materials. During this time, the audience are informed of the next speaker’s “signature color” so that they can prepare their penlights accordingly. Of course, that alone would have left some extra time, so the emcee skillfully introduced booths and shared behind-the-scenes stories about the speakers, making the waiting time feel short for the audience. 👏 Of course, the production was great, but this Lightning Talk (LT) Session was also very interesting in terms of content. Since the session was supposed to be cut off after five minutes, it was impressive to see how each speaker came up with creative ways to summarize it. While some speakers probably didn't get to say everything they wanted to say, their time management skills were impressive, and it didn't really feel that way from the audience. I think most people who have experience with presentations or speeches would feel that it’s incredibly difficult to concisely summarize what they want to say in a short time. 😭 I myself tend to talk at length because I’m quite chatty. Structuring a talk with a clear beginning, middle, and end—while also adding a touch of humor—makes this kind of event an incredibly valuable opportunity for a great presentation! Also, giving a short presentation helps sharpen time management skills. Watching them glance at the remaining time and instantly decide things like “I’ll cut this part,” all while effectively conveying their main points, made me think, “They must also be really good at facilitation.” As the Tech Blog team also aims to improve the employees’ output skills, I’d like to incorporate opportunities like this within the company as well.😎✨ Tech blog members enjoying penlights Doing What I Want to Do = Maybe Everyone Will Have Fun…!? This is my simple impression from attending iOSDC, but overall, it was a very meaningful conference even for non-engineers like me. 
Of course, I don’t understand technical details, but my desire to improve our company’s products is the same as that of the engineers. I participated to learn about event management, but from a PdM perspective, I also realized, “So this is what engineers are thinking,” and thought, “If we incorporate these ideas into our products, they might improve even more!” From the perspective of my original purpose—event management—it was an extremely valuable conference. 🤩 During the social gathering, I fortunately had the opportunity to talk with Mr. Hasegawa , the chairperson of the executive committee. When I asked him about overall event planning and efforts to make an event exciting, he said, “I’m just embodying what I want to do myself.” I thought, this is really a profound truth. Even now, after the event has ended, it really resonates with me. Before participating, I was worried about “How could I make everyone enjoy our event?” but then I came up with a new idea: “If I try doing what I find interesting, maybe everyone else will enjoy it too.” Of course, whether it resonates with the audience or not is another matter, but gaining this new perspective has ignited my passion to plan and manage various events going forward. 🔥🔥🔥 At KINTO Technologies, we will continue planning events for external audiences. We will provide information as needed through Connpass (in Japanese) and other channels, so please feel free to join if you’re interested. ✨
アバター
1. Event Overview The fifth SRE NEXT event was held on July 11 and 12, 2025. As a platinum sponsor, our company exhibited at a corporate booth and spoke at a sponsored session. In addition to the many fantastic sessions, we were able to interact with many people at the sponsor booths and book corner, making these two days extremely valuable. This article features a roundtable discussion with KINTO Technologies members reflecting on their first time exhibiting at the event. 2. KINTO Technologies and SRE 2-1. What Kind of Organization Is It? KINTO Technologies is the Toyota Group's first in-house development organization and is responsible for the development, maintenance, and operation of systems for consumer mobility services, including the car subscription service KINTO. As of July 2025, the company employs approximately 400 engineers, designers, product managers, and other professionals, developing services for both internal and external users. Within this organization, the SRE Team is part of the platform group and works with product teams to maintain and enhance system reliability while supporting developers. 2-2. Current State of SRE As Osanai announced during the sponsor session on the day, KINTO Technologies has a well-developed cross-functional organization, with multiple teams sharing many of the responsibilities that would normally be handled by platform SREs at many companies, including cloud infrastructure engineers, DBRE, platform engineering, a security specialist team, and a team that works with CCoE and finance. Here is the presentation material from the day 👉 What Does SRE Do in an Organization with Segmented Roles? - Speaker Deck Two engineers promoting the practice of SREing are working closely with product development teams to implement best practices. Although they face challenges in linking service levels to business metrics and development processes, as well as difficulties applying platform patterns within team topologies, they continue to experiment and refine how to best deliver value. 2-3. Motivation for Exhibition KINTO Technologies launched a Developer Relations Group in 2022, and in 2023 elevated it into a Tech Blog "group" to enhance its communication efforts. In 2024, the company began sponsoring conferences. Recently, its CEO, Kotera, spoke at the Development Productivity Conference. The company has also sponsored conferences across various fields, supporting the engineering community. I believe the appeal of this community lies in conferences where engineers can communicate directly with one another, and I am glad to be part of this opportunity. KTC’s SRE Team is small and currently in a growth phase. We decided to hold a sponsored session, first to raise awareness of the SRE Team's presence, and then to share our unique challenges and efforts within KTC’s segmented-role environment, hoping they could serve as a reference for others facing similar issues. 3. Activities on the Day 3-1. Booth Operation We asked visitors to write on sticky notes under the theme “What’s Your ‘NEXT’?” Those who participated got to spin a gacha-gacha capsule toy machine and receive a novelty gift. We offered KINTO's mascot character Kumobii plush toys (large and small) and Toyota Tomica cars as novelties, which were very well received by everyone. Novelties Offered at the Sponsor Booth In just one day of operating the booth, we received so many "NEXT" ideas that the board was completely filled. 
It allowed us to experience this year's theme, "Talk Next," together with the participants. Many visitors wrote down their various "NEXT" ideas 3-2. Presentation As a sponsored session by our company, Osanai from the SRE Team gave a presentation titled "What Does an SRE Do in a Role-Fragmented Organization?” As this was his first time presenting at an external event, he seemed very nervous, but despite worrying daily, thanks to his hard work and diligent efforts, he was able to deliver a presentation he was satisfied with. Osanai keyed up at his first external presentation He was very nervous about what kind of reaction he would get after taking the stage, but luckily many people came to the Ask the Speaker session, and he was able to have fun talking to them, including some behind-the-scenes stories that he couldn't fit into the 20-minute presentation! A photo from Ask the Speaker 3-3. New Learning Our company has many young engineers, and many of our members are not used to participating in external events. This event provided many inspiring experiences for our young engineers and served as a very valuable opportunity to interact with renowned engineers, including Brendan Gregg, author of "System Performance." A young engineer thrilled to take a photo with Brendan Gregg Furthermore, many young engineers who began their careers in cloud engineering lacked knowledge of the technologies behind physical networks. At the venue, however, they had the chance to receive clear explanations about the roles of devices like routers and switches—an experience that directly enhanced their technical proficiency. Cloud engineers unfamiliar with physical networks being taught about routers and switches 3-4. Interactions with Participants As a sponsor, we participated with the goal of raising awareness of KINTO and KINTO Technologies among a broad audience. More than anything, the greatest benefit was the inspiration and learning we gained through our interactions with other participants. Organizing members chatting with visitors 4. Roundtable Discussion among Participating Members The organizing members, who had such an enjoyable two days, held a roundtable discussion to reflect on the event. SRE: Osanai and Kasai / Cloud Infrastructure: Kossy and Shirai / TDeveloper Relations Group : Yukachi Organizing members having a round-table discussion in the office 4-1. What Is the Most Memorable Thing? Kasai: “I was at the booth the whole time, so I couldn’t attend the sessions, but I spoke with several people who visited the booth, and I was struck by how many of them were struggling with how to incorporate generative AI into their SRE work.” Osanai: "I was really happy that there were people who were interested in my presentation and came to listen to it. Afterwards, some people came to talk to me directly at the Ask the Speaker event, and I was really grateful for that.” Shirai: "The biggest thing was the passion of all the participants to make the event a success. Since the theme was Talk Next, I really liked how everyone was sharing their know-how and speaking with mutual respect. I'm grateful to the organizing members for creating SRE NEXT, and I would love to participate as a member of the organizing team if I get the chance." Kossy: "What impressed me the most was the enthusiasm of the community. Some people listened to the sessions with deep focus, while others seemed to enjoy lively interactions in different places. 
I felt it was a really great space for people who struggle with the same themes in their daily work to share their experiences." Yukachi: "I think what stood out was the high level of communication skills of everyone who helped with the event! Each person's personality shone through, and it was great to see them enjoying themselves while working at the booth. I want everyone to check out the highlights I posted on X (lol)." ![](/assets/blog/authors/n.osanai/2025-07-18-sre_next_look_back/yukachi_x.jpg =500x500) 4-2. How Was Your First External Presentation? Interviewee: Osanai-san Osanai: "The last time I presented in front of people was probably at a piano recital in elementary school... (lol)." Kossy: “What motivated you to take the stage this time? Did you have something particular you wanted to share with everyone?” Osanai: My main motivation at first was to raise awareness about KTC's SRE Team. So I started thinking about what I should talk about to achieve that, but when we got the sponsorship, nothing really stood out to me as “this is it!” Still, once I knew I’d be speaking, I wanted to share something that would resonate with the audience. That’s when ideas like the improvement tools I mentioned in the presentation started to emerge, and from there, the outline gradually took shape. Once that was decided, I started gathering additional information to fill in the gaps in what I wanted to talk about, while also connecting it with the things I had done up to that point. The opportunity to speak gave me some idea of what lies ahead for us and helped me grow so much.” Kossy: "At the booth, many people said that KTC's presentation was great. What kind of questions did you receive during the Ask the Speaker session?" Osanai: "We discussed various topics, including how the New Relic Analyzer I mentioned in the presentation works, and efforts to improve the accuracy of Devin's suggestions. We also talked about challenges raised by the attendees." A person I used to work with also came by, and we had a great time reminiscing about those days and catching up on each other's current lives.” Kossy: "There were questions from participants from companies struggling in similar areas, such as how we approach obstacles that prevent us from taking action." Osanai: "That's right. I realized that everyone has similar challenges." Yukachi: "By the way, the person who was sitting next to me during the presentation turned out to be someone who had visited the booth on the first day. After the presentation, I spoke with that person and found out that this person lives in Fukuoka, and that a Fukuoka office opened in July. From that conversation, I was able to invite the person to an event to be held in Fukuoka." I was really happy that the person became interested in our company because of Osanai-san's presentation!” Osanai: “Two days after SRE NEXT, some candidates came for interviews, so I think the presentation helped them better understand KTC.” Kossy: "It was your first time speaking at an external presentation, so you must have been nervous, but was there anything in particular that you hadn't anticipated or hadn't imagined?" Osanai: "In fact I expected to be so nervous that I wouldn't be able to eat anything for a few days before the event, but surprisingly, I wasn't that nervous. I realized I could eat quite well after all." Yukachi: "I think it’s because it was your first time, and you prepared thoroughly." Osanai: "That may be true. 
Surprisingly, I could see you all clearly from the stage and I even made eye contact with Kumobii on Kossy-san’s headband, making me speak in a relaxed way." However, when I saw the photo, I had a really stern look on my face, and I thought, "Wow, that's what I looked like... (lol)" Kossy: "Right before the presentation, your eyes were really bloodshot." I honestly thought Osanai-san would be able to speak just fine, but with everyone hyping it up, he seemed a bit nervous, and right before it started I found myself getting nervous too (laughs). But surprisingly, he was really steady, and the content of his talk had a lot of things that were valuable for us as the neighboring team—like, ‘wow, that approach is amazing. Yukachi: "One thing I regretted that day was not giving Osanai-san a pat on the back before the presentation (lol). I wouldn’t worry if it were Kossy-san or Shirai-kun speaking, but since it was Osanai-san’s first external presentation, and his face looked so tense, I was really worried (lol). But once he started, he was solid, and honestly, I was kind of moved (lol)." Kossy: "All in all, it was a success!" Osanai: "The next task will be managing my facial expressions (lol)." 4-3. Talk Next — What's Next? Kasai: "Right now, I'm working on an improvement tool, and I definitely want to see it through. I think the process will give me even more to talk about, so I'd love to share that externally as well." Osanai: "I personally want to improve the quality of our improvement tools and the accuracy of their suggestions. But these tools won’t get anywhere unless people are actually interested in using them. So while continuing development, I also want to promote them to different product teams. We also concluded that trying to decide on service levels among engineers alone didn't work. Moving forward, we want to be able to have conversations with the business side as well—discussing things like what level of quality is actually needed." Through this event, being involved in things like running the conference made me want to expand my network with all kinds of people, and it also gave me the motivation to build a platform that’s easier to use from a developer’s perspective." Yukachi: "This was Shirai-kun's first time staffing the booth, and hearing you say that makes me really happy—it feels like it was a great opportunity for Shirai-kun!" Awata-san and Kossy-san have many acquaintances in the SRE community, and this time, I felt that many people already knew about KTC. By everyone expanding their networks like this, KTC’s recognition will grow, and above all, the more people you know, the more enjoyable it becomes to attend conferences. So, I hope more people will actively participate and get excited about these events!” Kossy: "I want to help build a stronger community with people outside the company, and to do that, I want to foster a culture and put best practices into action within our organization.” 5. Summary 5-1. What We’ve Learned At SRE NEXT this time, we learned the following through presentations by each company and interactions with the participants: Many companies share similar challenges, but there are as many approaches as there are companies, and even similar approaches can produce different results. It is important to consider the SRE approach not only from an engineering perspective, but also from a business and organizational perspective. 
- There were many comments about how building trust with the product team has a big impact on SRE activities, and we were reminded of the importance of daily communication.

5-2. KTC SRE's Next Challenges

With this in mind, KINTO Technologies' SRE Team intends to take on the following challenges:

- Further development and promotion of our improvement tools
- Establishing appropriate service levels in collaboration with the business side
- Sharing what we learn both inside and outside the company to help energize the community

6. Conclusion

Many thanks to the SRE NEXT organizers, as well as everyone who stopped by our booth, attended our sessions, and connected with our team. It was our first time sponsoring SRE NEXT, and it turned out to be a truly valuable experience. We'll continue practicing and experimenting with SRE and look forward to future opportunities to share what we've learned!

Organizing members who participated from KINTO Technologies

Recruitment

KINTO Technologies is looking for people to join us in building a mobility platform. Please feel free to visit our recruitment website below!

👉 KINTO Technologies Corporation Recruitment Information
Introduction

I'm Nakamuraya from the Creative Group of the KINTO Unlimited app. We've recently decided to implement sound in the app, so I'd like to share the process and the thinking behind it.

A business team member on the KINTO Unlimited project casually asked if we could add sounds that make users "feel so good they end up continuing to use the app"—leaving it entirely up to the Creative Group!

Lots of apps have sound built in, and the apps that feel good tend to have stylish, well-designed sounds too, don't they? Just as I was thinking, "Hmm, maybe we could bring in that sound designer or artist...", the dream was cut short—it turned out we didn't have the budget for that kind of original production. I had envisioned something big, so it was a letdown. But since I couldn't compromise on quality, I looked into various sound services known for their quality and decided to use a paid service called Splice ( https://splice.com/sounds ).

Although this was a side project to my main work and an area where I didn't have much experience, if you're curious about any of the following, read on:

- How do you choose and assemble sounds from such a huge catalog?
- What does the design process look like, up to implementation?
- How is the Creative Group involved in development?

What Is the Sound World of Unlimited?

First, the direction. This is a key step that greatly influences everything that follows. We also needed to verbalize the app's sound concept and set criteria for choosing sounds, so that we wouldn't spend excessive time searching.

Define the scope: as this is experimental, implementation is limited to a minimal, specific set of experiences. We focus on feedback sound effects (SEs) for user operations, as well as background music.

The Unlimited service is a new way of owning a car, where your car is upgraded with new technology after purchase, and its keywords are futuristic, innovative, optimized, smart, and secure. We reflect these keywords in the sounds. What came to mind was "modern, comfortable digital sounds that blend into the environment" (a hypothesis). To avoid narrowing the range of ideas and expression, we kept the concept hypothetical and flexible, and searched for sounds while imagining a balance between calming elements and a cool, crisp feel.

And soon, we realized this wouldn't work. There was no way that a set of sounds picked on instinct by someone who isn't a sound professional would come out harmonious and consistent.

However... we found it! A method that ensures both quality and efficiency. Splice offers sound packs, including packs for games and app UIs. So we chose a sound pack with modern, sci-fi elements and a comfortable feel, and began selecting candidate sounds. We then used Adobe Premiere Pro to try the sound effects against recordings of app operations and narrowed the candidates further.

:::message
Tips: Sound packs that credit the sound designer by name are particularly good! Their concepts are consistent and clear, the sound quality and volume are stable (normalized), and they are easy to implement without extra adjustment.
:::

Change of Direction

Rather than aiming for perfection, we asked project members to listen to variations early on and give their opinions on the direction. The feedback was basically positive, but one comment caught my eye: "They're good, but maybe they should feel a bit more common?" It came from a member deeply involved in the app and the service.
I felt the Creative Group should pick up on this feeling and interpret a discomfort that couldn't quite be put into words. "Common" can mean ordinary, unrefined, or conventional, but we didn't take it literally; we interpreted it from a design perspective: sophisticated, futuristic sounds are not suitable → they are not the value we should provide to users → we should align more with real users than with the vision, so as to evoke empathy.

As mentioned at the beginning, to promote continued app use, the app already has measures such as beginner content and gamification—promoting usage by focusing on real users rather than offering one-sided value. We realized our original concept wasn't wrong, but the app's concept had been gradually changing, and the sound concept needed an update to keep up. We redefined it as providing an experience that makes the latest technology feel familiar, offering a sense of security as we grow together.

Here is a sample of the sounds we reworked based on this concept.

https://www.youtube.com/watch?v=oeGNNqRJs50

Familiar sounds reminiscent of one's own memories, with a playful touch that might become addictive—don't they evoke that kind of image?

Before Implementation

We hand the finalized sound data over to the engineers and leave the rest to them! But that's not the end of it. The design phase from here on is also crucial in shaping the user experience. For example, it feels very pleasant when a sound synchronizes closely with the visual accents of an animation (e.g., a sound playing the exact moment a coin flashes). Conversely, a mismatch here creates discomfort and stress. For the SE played when a button is pressed, if it fires at exactly 0.00 seconds on the press, it can feel stiff; a slight delay of a few tens of milliseconds feels more natural and refined. *The right approach varies with the theme.

Based on this thinking, we compile specifications that make it reproducible where, when, and how each sound plays. (First, without overthinking feasibility, we capture the ideal user experience.) Since this is not a specialized sound app, we avoid deep technical audio concepts and define the implementation spec as follows (a code-level sketch follows the list):

- Management ID / sound file name / target screen
- Playback trigger: clearly specify the user action or event that causes the sound to play, such as "when tapping the 〇〇 button" or "when displaying the △△ animation."
- Loop playback: whether the sound loops or plays once.
- Volume: design volume around the meanings and relationships of the sounds, such as keeping background music and cancel sounds modest.
- Delayed playback: lets playback timing be adjusted relative to the trigger, keeping the trigger logic simple.
- Fade-in: adjusts how the sound starts and helps avoid collisions between sound effects and background music.
- Fade-out: stopping background music with a lingering tail, rather than cutting it off abruptly, leaves a more polished impression.
- Note: describe the intention behind the playback timing clearly to avoid confusion.
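To make the sheet concrete, here is a minimal sketch of how such a spec could map onto code on the app side. This is my own illustration, not our actual implementation: the type names and fields are hypothetical, and it uses AVFoundation's AVAudioPlayer, whose prepareToPlay() also covers the preloading mentioned later.

```swift
import AVFoundation

// One row of the sound spec sheet (names are hypothetical, for illustration).
struct SoundSpec {
    let id: String                 // management ID
    let fileName: String           // sound file name
    let screen: String             // target screen
    let trigger: String            // e.g. "when tapping the purchase button"
    let loops: Bool                // loop playback or one-shot
    let volume: Float              // relative scale 0.0–1.0, as in the spec
    let delay: TimeInterval        // delayed playback relative to the trigger (seconds)
    let fadeDuration: TimeInterval // fade-in/out length (seconds)
}

final class SoundPlayer {
    private var players: [String: AVAudioPlayer] = [:]

    // Preload so a one-shot SE can start without decode latency.
    func preload(_ spec: SoundSpec) throws {
        guard let url = Bundle.main.url(forResource: spec.fileName, withExtension: nil) else { return }
        let player = try AVAudioPlayer(contentsOf: url)
        player.numberOfLoops = spec.loops ? -1 : 0
        player.prepareToPlay() // reads the audio data into buffers ahead of time
        players[spec.id] = player
    }

    // Play while honoring the spec's delay and fade-in.
    func play(_ spec: SoundSpec) {
        guard let player = players[spec.id] else { return }
        player.volume = spec.fadeDuration > 0 ? 0 : spec.volume
        // Delaying relative to the trigger keeps the trigger logic itself simple.
        player.play(atTime: player.deviceCurrentTime + spec.delay)
        if spec.fadeDuration > 0 {
            player.setVolume(spec.volume, fadeDuration: spec.fadeDuration) // fade-in
        }
    }

    // Stop with a fade-out instead of cutting the sound off abruptly.
    func stop(_ spec: SoundSpec) {
        guard let player = players[spec.id] else { return }
        player.setVolume(0, fadeDuration: spec.fadeDuration)
        DispatchQueue.main.asyncAfter(deadline: .now() + spec.fadeDuration) {
            player.stop()
        }
    }
}
```

A real implementation would also need audio session configuration and interruption handling, which the article touches on below.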
The following concerns the data itself. The devices the app is installed on belong to our users, so we must be mindful of app size to avoid burdening those devices. The data specifications below are not the highest possible quality, but they are set at a sufficiently high level.

- SE: WAV format or AAC format*
- BGM: AAC format

*For important sounds (brand SEs) and frequently used SEs, WAV is recommended. For SEs exceeding 200 KB and longer than 1 second, consider AAC. Our baseline after AAC compression: stereo source at 256 kbps variable bit rate (VBR), sampling rate of 44.1/48 kHz.

Since sound effects need to play instantly, WAV (uncompressed, highest quality) is suitable because the data is played as-is, while AAC (compressed) requires a decoding step before playback, which causes a slight delay. *With recent smartphone processing power, such a slight delay is unlikely to be noticeable except to professionals.

Beyond this, there are other items that need detailed definitions, such as audio interruptions and preloading (reading data into memory in advance). We share these with the producer and engineers to an appropriate extent and refine the details together. This is an advantage of in-house development―you can move forward together with knowledgeable people instead of worrying alone about things you don't understand.

Conclusion

Although these are only part of the development details, I'll stop here for now as a milestone. The reason we made this much progress in an unfamiliar area was the use of AI, including ChatGPT. It helped me identify the necessary perspectives, served as a sounding board, and let me deepen my thinking until it became convincing. Still, no matter how deep I dug into sound theory, there seemed to be no bottom in sight. That's why it was important to define sounds in a way that allows a common understanding within the company. We take care to create specifications and communication that are easy to understand within the project without being overly technical. (For example, instead of using dBFS values for volume, we set a reference point and express volume as a relative value on an easy-to-understand 0.0–1.0 scale.)

Even so, sound is a very deep field, and I know a lot is missing here. Moreover, music is a mass of sensibility, perceived differently by each person—or more precisely, by each person's mental state at the time. I've introduced the process of weaving these things into the user experience.

Finally, at KINTO Technologies the concept of the Minimum Viable Product (MVP) is well established, so once we gain support, we can quickly build an idea, proceed with development, and then iterate while monitoring user feedback. This is just one example, and I hope it gives you a glimpse of how the Creative Group is involved in this kind of development.

Thank you for reading to the end.
#iwillblog → #ididblog

Hello everyone. My name is Koyama, and I'm in charge of iOS in the Mobile App Development Group. I attended iOSDC 2023, so I'd like to share my experience, albeit belatedly. Two of us from our company—GOSEO and I (Koyama)—will each share our experiences. This year, a member of our Tech Blog team who is not an iOS engineer also participated; an article from that member's operational perspective has been compiled in A Report (Management Perspective) on Participation in iOSDC Japan 2023, so be sure to check it out! Last year's participation report is also available in #iwillblog: iOSDC Japan 2022 Participation Report.

Part of KOYAMA

This was my first time attending iOSDC in person. I'd like to summarize what was presented at the various company booths and my impressions from the sessions I attended.

On-site Booths

Over the course of three days, I visited most of the booths and heard many stories from fellow iOS engineers working in the field. As an iOS engineer, I especially enjoyed LINE's code review challenge and DeNA's mental SwiftUI rendering quiz. I took on the mental SwiftUI quiz because I work with SwiftUI regularly, but I couldn't render components I had never used, and to my chagrin it was a crushing defeat. (That said, I learned a lot and enjoyed the experience.)

I also enjoyed the AR makeup at the ZOZO booth. It was refreshing to see how quickly facial feature recognition could be done. It seems that bright red lipstick suits me far too well, which was a surprising new discovery (?). Because it suited me too well, I've covered part of my face.

Many sponsors prepared all kinds of novelties, and among them Findy and dip were running prize lotteries side by side, so I went to try my luck there as well. The result: another crushing defeat. Within the one-challenge-per-day limit, I was especially unlucky with Findy's lottery, drawing "great misfortune" both times I tried. So regrettable... (Many people before and after me drew "excellent luck.") Apparently drawing "great misfortune" two days in a row is rare in its own right. Wait, am I just here enjoying the event?

Sessions I Attended

Of course, I also attended the main sessions. Let me comment on the ones that particularly caught my attention.

Getting a Complete Picture of Privacy at Apple

The first was a report on privacy by @akatsuki174. Apple's OS controls access to information such as the camera and location, so you won't accidentally access anything unintended or inappropriate. This rigor is one of the reasons I love iOS development. These privacy-related items are closely checked during App Store review, so as an engineer it's important to keep up with them.

A topic that particularly caught my attention was the permission status for location information. When obtaining location data with CLLocationManager—for example, when you want to "always" get location information—you have to request permission in stages, which I had never heard of before. The official documentation states:

You must call this or the requestWhenInUseAuthorization() method before your app can receive location information. To call this method, you must have both NSLocationAlwaysUsageDescription and NSLocationWhenInUseUsageDescription keys in your app's Info.plist file.
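In code, the staged flow looks roughly like the sketch below—a minimal illustration of my own against the iOS 14+ APIs, not code from the session. You ask for When-In-Use first, and only escalate to Always once that has been granted.

```swift
import CoreLocation

final class LocationPermissionManager: NSObject, CLLocationManagerDelegate {
    private let manager = CLLocationManager()

    override init() {
        super.init()
        manager.delegate = self
    }

    // Step 1: start with the less invasive When-In-Use permission.
    func requestPermission() {
        manager.requestWhenInUseAuthorization()
    }

    // Step 2: once When-In-Use is granted, escalate to Always.
    // Both usage-description keys from the quoted documentation
    // must be present in Info.plist for these prompts to appear.
    func locationManagerDidChangeAuthorization(_ manager: CLLocationManager) {
        switch manager.authorizationStatus {
        case .authorizedWhenInUse:
            manager.requestAlwaysAuthorization()
        case .authorizedAlways:
            manager.startUpdatingLocation() // "always" collection is now permitted
        default:
            break // denied, restricted, or not yet determined
        }
    }
}
```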
I see—so in order to constantly collect location information ( requestAlwaysAuthorization() ), you must first obtain permission while the app is in use ( requestWhenInUseAuthorization() ). I had vaguely seen this function before, but this was the first time I learned how it actually works, so it was very educational. On a personal note, I loved the funny sight of Akatsuki-san, who appeared on stage for the recording that day, talking while only the head of a mannequin was projected. LOL

Everything about iOS App Development Completed Only on an iPad

This lightning talk (LT) was about developing iOS apps solely on an iPad—no matter what. The conclusion was that it is possible, but the inability to use GitHub was pointed out as a major issue, and I felt exactly the same. Still, the fact that app development is now feasible to some extent even without a MacBook shows how much times have changed. Being able to develop iOS apps anytime, anywhere is great news for engineers.

How to Fight When You Are Accused of a Developer Program License Violation You Didn't Commit and Your App's Search Ranking Is Lowered

Another interesting LT. It was the sad story of a developer whose app saw sudden spikes in access at certain dates and times; Apple suspected fraud and lowered the app's search ranking, and the developer is still fighting the issue to this day. Given the nature of the app, I understood why access would spike significantly on Setsubun day, and I also fully understood why Apple would see that as a potential risk. However, Apple's hesitance to respond to the developer's inquiries seems to make the issue difficult to resolve. This was a personal project, but similar patterns can occur in apps developed by companies, so I gratefully took it as valuable insight for the future.

KOYAMA's Summary

That concludes Koyama's part. The festive atmosphere at iOSDC was fantastic! I couldn't attend all three days this year, but I firmly resolved to attend every day next year. I was also able to talk directly and take photos with people from the iOS community whom I'd only seen on X (Twitter), so in that respect, too, it was a very satisfying event.

Part of GOSEO

This was my first time attending iOSDC online. Before the event I picked out the sessions I planned to watch, and below are my impressions after actually watching them.

Luxurious Novelties

While everyone else was saying their novelties had arrived, I was eagerly and excitedly waiting for mine. It turned out I had made a mistake with the address I registered, and the event organizers contacted me to say the novelty couldn't be delivered. I apologize to the organizers for the trouble I caused. After finally receiving the novelty—a small cup—I've been using it with care (only on the days I go to the office).

Luxurious novelty box

A mug just right for use at the office

Sessions I Attended

Exploring the Black Box of UI

When I heard that the quality of custom UIs tends to be lower than those provided by the OS, yet under certain conditions custom UIs become necessary, I could really relate—it made me realize that building custom UIs is something many engineers go through. Not all custom UIs are bad, though: their quality can be improved by adhering to the HIG and by analyzing the UIs the OS provides. I'll keep this in mind in future implementations.
The speaker also explained where to focus during analysis, emphasizing the value of studying the HIG elements on screen to discover UI patterns, and stressed implementing UI that feels natural and intuitive to the user: behaviors that feel familiar make an app more user-friendly and reduce any sense of discomfort during use.

What impressed me most was the tooling for analyzing the UI of a published app itself. The View Hierarchy Debugger is well known among iOS engineers, but it can only inspect apps running locally. The speaker introduced Frida as a tool for investigating the UI structure of apps like Maps and analyzing structures that can't be seen on screen, and kindly explained how to set it up, which really motivated me to try it out.

Technology for More Accurate Passport Scanning in Travel Apps ~ MLKit / Vision / CoreNFC ~

The speaker compared MLKit and Vision in terms of SPM compatibility, ease of implementation, and OCR accuracy, judging the two comparable on implementation and OCR accuracy, with Vision ahead on SPM compatibility. The speaker then explained how to read a passport's characters with Vision, and specifically how to use the passport's NFC chip to compensate for OCR reading errors. There was also an introduction to implementing NFC, making it a very informative session.

GOSEO's Summary

That concludes GOSEO's part. It was a great iOSDC, where I encountered knowledge I don't usually deal with or notice. I definitely want to participate again next year. It was a wonderful event that helped me see where I currently stand and the direction I should aim for.

Conclusion

This concludes the series with the hashtag #ididblog! My post was delayed, so next year I want to publish earlier. I can't wait for iOSDC 2024!
Nice to meet you. I'm Aoshima, a UI designer at KINTO Technologies. I usually handle the UI for business applications.

A little while ago, we ran a usability test to see how customers use our website, aiming to feed the insights into our site redesign. It was also something of a trial run for us, so we kept things small by recruiting participants from within the company. Even so, we collected data with plenty of valuable findings. In this article, I'll share the outline of the test and the tricks we used to carry it out.

What Is a Usability Test?

First of all, despite the name, it isn't about passing or failing. It's a method for evaluating three key factors that are essential to the concept of usability. So first, let's look at the general idea of usability.

The Definition of Usability

The term "usability" is often used in a broad or vague way, typically to mean how easy something is to use. However, it has a clear definition in the international standard ISO 9241: "The extent to which specified users can use a product to achieve specified goals with effectiveness, efficiency, and satisfaction in a specified context of use." The three key elements—effectiveness, efficiency, and satisfaction—can be explained as follows:

- Effectiveness: whether users can achieve their goals. For example, on an e-commerce site, can users successfully complete a purchase?
- Efficiency: whether users, assuming they can achieve their goals, can do so via the shortest possible path, without unnecessary steps.
- Satisfaction: the degree to which users can operate the product comfortably and without frustration, even if there are no major problems with effectiveness or efficiency.

For example, on an e-commerce site, if users can't complete a purchase in the first place, usability is very low or absent entirely. Even if users can reach their goals, usability is still low if it takes too many steps, like having trouble finding what they're looking for. And if the experience is frustrating or unpleasant for any reason, satisfaction drops and so does overall usability. Leaving such issues unaddressed could mean losing important customers to competing products or companies. To prevent that, it's essential to understand how customers behave when using your product or website, and to take usability into account.

Usability Testing: What It Can Do and What It's For

The foundation of a company's profit-making activities is ensuring that customers are happy to use its products and services rather than frustrated by them, so any problems with those products or services should be improved. The first step is identifying where the problems are. Fortunately, we already had access to a customer survey conducted by our analytics team. It included targeted questions about which parts of the website users found confusing, and we used that feedback as a key reference for planning our usability test. If you can observe how customers behave at those problem points, you get clues as to why those areas cause trouble in the first place. Gathering these kinds of hints is what usability testing can do.

To put it another way, surveys are like the English tests you'd take in school: they're good at pinpointing what went wrong, like whether it was listening, grammar, or something else.
Usability testing, on the other hand, is better at uncovering why something went wrong and what could be done to improve it.

Preparation Before the Test

Setting Tasks and Conditions

As a first step, we prepared tasks and conditions based on findings from the prior survey. The results showed that users across all age groups had trouble understanding certain areas of the website, so we focused on those points and designed tasks to evaluate two key aspects: effectiveness and efficiency.

Setting these tasks and conditions was important for two reasons. First, if we let participants explore the site freely, they might finish the session without ever encountering the problematic areas; set tasks prevented that. Second, it allowed participants with varying levels of digital literacy to perform the tasks under the same conditions.

Prepare a Script, Questions, and Surveys

Next, we prepared three key materials: a script for explaining the test to participants, a question sheet for understanding their background and digital literacy, and two post-test surveys to fill out after the session. To help participants feel comfortable taking part, it was important to clearly explain the content and flow of the test beforehand. We also asked questions to understand each person's background and level of digital literacy, so having a script ensured everything was explained clearly and the test ran smoothly without missed steps. Ice-breakers and other small things can eat up more time than expected, so it's a good idea to set a rough timetable for the session if possible.

The post-test surveys were designed to measure the third and final key element mentioned earlier: satisfaction. For this we used two metrics: the Customer Satisfaction Score (CSAT) and the System Usability Scale (SUS). CSAT is commonly used in customer satisfaction surveys and measures satisfaction on a five-point scale. SUS measures how users perceive aspects like ease of use and difficulty, and is widely used as a standard metric for evaluating overall UX. One reason SUS is especially useful is that it comes with a clear benchmark: a score below 68 is a sign that usability needs review, which makes the metric easy to interpret and act on.
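As an aside, the SUS score itself is computed with a fixed formula over ten responses on a 1-to-5 scale. A minimal sketch (the response values here are made-up example data, not our results):

```swift
// SUS: 10 statements answered on a 1–5 scale (1 = strongly disagree, 5 = strongly agree).
// Odd-numbered items are positively worded, even-numbered items negatively worded.
func susScore(responses: [Int]) -> Double? {
    guard responses.count == 10, responses.allSatisfy({ (1...5).contains($0) }) else {
        return nil // SUS requires exactly ten answers in the 1–5 range
    }
    var total = 0
    for (index, answer) in responses.enumerated() {
        // Items 1,3,5,7,9 (even index): contribution = answer - 1
        // Items 2,4,6,8,10 (odd index):  contribution = 5 - answer
        total += index.isMultiple(of: 2) ? (answer - 1) : (5 - answer)
    }
    return Double(total) * 2.5 // scales the 0–40 raw total to 0–100
}

// Example: a fairly positive response pattern.
if let score = susScore(responses: [4, 2, 4, 2, 5, 1, 4, 2, 4, 2]) {
    print("SUS score:", score) // 80.0 — above the commonly cited benchmark of 68
}
```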
Device Setup

As a final preparation, we set up a smartphone and a laptop to record participants' facial expressions and on-screen actions during the test. We used the smartphone to film hand movements and the laptop to record facial reactions, setting up both ahead of time. Once the test began, we logged into Microsoft Teams on both devices and used the built-in recording feature. This is extremely handy because it automatically saves the recordings to the cloud and combines them into a two-screen layout, making review much easier. By the way, the smartphone stand came from a 100-yen shop.

As a side note, some years ago we used two separate video cameras, one for hand movements and one for facial expressions. The footage had to be saved locally and edited manually into a synchronized split-screen view for comparison. Thinking back on that process, I was genuinely impressed by how much easier testing has become in just the past few years.

Carrying Out the Test

Once all the preparation was complete, it was finally time to run the test. After a brief ice-breaker and the explanations and questions, we moved on to the tasks on the website. The test used the think-aloud method, in which participants verbalize their thoughts as they operate the site. Combining what we can see on screen with participants' spoken thoughts lets us understand their behavior from multiple angles.

Things to Watch Out for During Testing

There are two things to keep in mind during the test. First, because participants are not used to thinking aloud while performing tasks, the interviewer needs to keep prompting them to share what they're thinking, to prevent long silences. Second, participants often ask the interviewer questions during the test, and it's best to gently deflect these as much as possible (without ignoring them). In the pre-test explanation, we made it clear that the purpose was not to evaluate how well the participant could use the website, but to assess how easy or difficult the website was to understand. Even so, when participants felt unsure, they often asked questions instinctively. Answering could introduce bias, so it was important to judge carefully whether a question was appropriate to respond to. Once the tasks were completed and the surveys filled out, the test came to an end.

Preparation for Analysis

After the test, the next step was preparing for the analysis phase. It would be nice to take a breather after wrapping up the test, but this was actually where the more time-consuming work began. The first task was transcription.

For Information Sharing

The audio recordings ran about 20 to 30 minutes per participant, but transcribing them took quite a while, since we often had to rewind and replay unclear parts. This might have been the toughest part. That said, converting time-based audio into plain text made information sharing much easier. For the sake of future analysis and collaboration, it was a step worth sticking with, even if it required quiet persistence. (Automatic transcription tools still felt far from reliable at the time.)

The next step was to categorize and tag the spoken content to make it easier to organize. We first compiled everything chronologically in a spreadsheet, then copied it into a tool like Miro. This gave us an overview of multiple users' behavior and let us organize insights from various angles. If you want to take information sharing a step further, you can also create short, subtitled clips of the test footage, making it easy to share what happened during a session. If time allows, it might be worth the extra effort. In our case, with only five participants, it was manageable to go that far, but it was still a very time-intensive, demanding process.

From Analysis to Improvement

Normally, a group of people would analyze the data and use it to make improvements. However, since this was more of a trial run to validate the testing process itself, I simply wrapped my own observations into a report and left it there for the time being. Ideally, you would gather enough stakeholders to hold a discussion, exchange opinions, and weigh the findings together. Going through that kind of process makes it possible to move forward with improvements based on a shared understanding. Everything written here has been preparation for reaching that point.
Lastly

In this article, I wrote about a test we conducted on a small section of our website, just one part of the entire service. Even for such a limited scope, a great deal of time and preparation was required. But I believe these small, steady efforts accumulate and ultimately lead to a better experience for our customers. To keep up with the changing world and our customers' needs, we'll do our best to support the website's growth. I'd be glad if any part of this article helps those planning to run their own usability tests.
Introduction

Nice to meet you. I'm Kondo, the manager of the Owned Media & Incubation Development Group at KINTO Technologies. Our group name is so long that no one in the company ever says it correctly, so please feel free to call us Media Incube G. In this post, I'd like to introduce what our group is all about.

Group Overview

Establishment

Media Incube G is a newly formed group, established in August 2022. Originally, we were part of the KINTO Development Group, which was solely responsible for developing the customer-facing website for KINTO ONE, our subscription-based car service. As the group gradually grew, it became necessary to manage each sub-team with more focus, so the original group was split in two in August 2022: one half became the KINTO ONE Development Group, and the other became our group, the Owned Media & Incubation Development Group.

Products We Handle

Here are the main products each group is responsible for developing.

KINTO ONE Development Group

| Product | Overview | URL |
| --- | --- | --- |
| KINTO ONE | Develops features for onboarding to the new-vehicle subscription service and providing aftercare support. | https://kinto-jp.com/customer/login |

Owned Media & Incubation Development Group

| Product | Overview | URL |
| --- | --- | --- |
| KINTO ONE | Produces content for the top page of the new-vehicle subscription service, including vehicle listings, terms of use, and landing pages. | https://kinto-jp.com |
| KINTO Magazine | A media website that provides MaaS-related information from KINTO. | https://magazine.kinto-jp.com |
| Mobility Market | A service website where users can discover the joy of new forms of mobility. | https://mobima.kinto-jp.com |
| Prism Japan | An AI-powered app that provides inspiration for places to go. | https://ppap.kinto-jp.com/prismjapan/index.html |
| Used Car Product | A new mobility service from KINTO focused on used cars. | - |
| Dealer Product | Develops sales promotion tools for KINTO ONE, designed for Toyota dealership staff. | - |

Mission

Our mission at Media Incube G is to deliver the value of KINTO to customers to the fullest by leveraging the power of technology and creativity in both owned media and new business creation. As the group name suggests, we focus on two pillars: owned media (our in-house digital media) and incubation (supporting the creation of new businesses).

Owned Media (Our In-House Digital Media)

We create media that effectively conveys the value of KINTO's mobility services and products to customers.
Relevant products: KINTO ONE (user-facing content), KINTO Magazine, Mobility Market, Dealer Product

Incubation (Supporting the Creation of New Businesses)

Together with KINTO, we create and support new mobility services that follow in the footsteps of KINTO ONE, using technology to bring them to life.
Relevant products: Prism Japan, Used Car Product

What We Are Working On and Aiming For

Quality Assurance Initiatives

The user-facing content for KINTO ONE provides customers with essential information during the contract process. To keep up with frequent business updates, such as new vehicle listings or service changes, our average release cycle is one week; depending on the timing, we sometimes release even faster. Under such conditions, we must stay agile while still ensuring content quality. That's why we are constantly exploring and implementing initiatives to strengthen quality assurance.
Here are some of the measures we have implemented:

Automatic Checks in Our CI/CD Pipeline

We have an in-house QA team, and when product teams request it, these testing professionals conduct quality checks. However, due to business constraints, there are cases where content or materials cannot be fully prepared in time for QA testing. How can we still deliver without missing anything in such situations? Here's the approach we've taken:

- When a change is needed, we first commit a temporary version containing a specific dummy string.
- Once the final content is ready, we replace the dummy text and deploy it to the test environment.
- If the content is ready in time for QA testing, the QA team checks it on the assumption that it is final.
- If it is not ready in time, we tell the QA team which parts are still dummy text, and they test with that understanding.
- We have set up a test job in GitHub Actions that checks for the presence of the specific dummy strings. The check is triggered when merging into the main branch.

This allows us to pass QA testing while preventing dummy content from ever being accidentally deployed to production.
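The heart of such a job is simply "fail the build if the marker is still present." As a minimal illustration—the marker string, paths, and file extensions here are hypothetical, not our actual setup—the check could be a small script along these lines, with CI treating a non-zero exit code as a failed job:

```swift
import Foundation

// Hypothetical marker used for unfinished content (the real string is project-specific).
let dummyMarker = "___DUMMY_CONTENT___"
let root = URL(fileURLWithPath: CommandLine.arguments.count > 1 ? CommandLine.arguments[1] : ".")

var hits: [String] = []
if let enumerator = FileManager.default.enumerator(at: root, includingPropertiesForKeys: nil) {
    // Scan the content files the site is built from (extensions are illustrative).
    for case let url as URL in enumerator where ["html", "ts", "md"].contains(url.pathExtension) {
        if let text = try? String(contentsOf: url, encoding: .utf8), text.contains(dummyMarker) {
            hits.append(url.path)
        }
    }
}

if !hits.isEmpty {
    print("Dummy content still present in:")
    hits.forEach { print(" -", $0) }
    exit(1) // non-zero exit fails the CI job and blocks the merge
}
print("No dummy content found.")
```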
Pair Programming Required for Resolving Merge Conflicts

Since wide-ranging content updates are often made within a short time frame, merge conflicts occasionally occur. In such cases, we have a rule that conflicts must not be resolved by a single person: multiple members work together through pair programming, viewing the same screen and confirming each change as they go. Additionally, even for pull requests without conflicts, GitHub is configured to prevent merging unless at least one reviewer approves the changes.

Skill Development

Media Incube G is made up of members with diverse backgrounds and skill sets. However, it can be hard to know what members outside your own product team are working on, or what challenges they face, just from your daily tasks. To address this, we set aside time outside day-to-day work for skill-sharing sessions and technical knowledge exchanges among team members.

Study Sessions

Right now, we're planning a "Design System + Atomic Design Study Session" led by our front-end engineers. Since back-end engineers rarely get to explore these topics in their day-to-day work, they seem to be looking forward to it.

Technical Exchange Meeting

We're also planning a technical exchange meeting with participation from all front-end engineers, including members of the KINTO ONE Development Group. Those attending in person will enjoy snacks and coffee, and the event will also be available online so remote participants can join as well.

Teams and Members in the Group

As mentioned earlier, Media Incube G handles a variety of products. Within the group, we are organized into three distinct teams, introduced below. For clarity, we refer to them by the main product they handle; these may differ slightly from their actual team names.

1. KINTO ONE Team

Members (as of December 2022): 8

How the Team Works

As engineers working closely with the business side, our most important role is to understand the KINTO ONE business and determine how best to shape it through our systems. Rather than simply doing what we're told, we aim to fully understand each task ourselves, ask questions whenever something is unclear, and move forward based on our own judgment regarding the scope of impact and the optimal logic.

Team Atmosphere

The team has a strong desire to learn. While individual skill levels vary, no one is content with the status quo, and many of the team's half-year skill development goals are quite ambitious. Here's a blog post by our tech lead, who helps drive this culture of continuous learning across the team: Insights from using SvelteKit + Svelte for a year

2. Used Car Team

Members (as of December 2022): 4

How the Team Works

For various reasons, we can't share many details about this product. One notable aspect of how we work, though, is that we function as one team, where everyone is encouraged to share their opinions regardless of department or company. We hold seven weekly recurring meetings, each focused on a specific theme, and also meet weekly with the KINTO business side. Beyond those meetings, we communicate regularly through Jira and Slack.

Team Atmosphere

Our product manager and lead engineer actively engage with the business-side product owner. To be honest, not every engineer has been able to keep up with that pace, but each of us is working to deepen our understanding and improve our skills within our respective roles. Actually, I'm the lead engineer myself. I joined the company in September 2021, this project kicked off right after that, and I've served as lead engineer ever since. That said, as of November 2022 we've added more engineers to the team, and I believe we're now entering the skill-transfer phase. I'm looking forward to seeing a new leader step up and guide the team forward.

3. Prism Japan Team

Members (as of December 2022): 5

How the Team Works

The team consists of one project manager, one product manager, and three engineers. Prism Japan launched in August 2022 and is now in the operations phase. We've adopted an agile approach for the operations and refactoring phase, and to support this, a dedicated QA member from our QA team has been assigned.

Team Atmosphere

This is a team where everyone takes ownership and works independently. There's a strong sense of mutual respect among members, and they each play to their strengths while supporting one another across areas of expertise. I often hear lively discussions right behind me, as the team frequently exchanges ideas about challenges and potential improvements.

We're Looking for Teammates Like You

1. KINTO ONE Team

Product Manager (PdM)

We're working to build a PdM team that can define and propose the ideal form of the product from a system development perspective.

Click here to apply for the Product Manager (KINTO ONE Team) position

Front-end / Back-end Engineers

To bring our ideal product vision to life, we also need engineers who can build it with their own hands. We're looking for members eager to learn new technologies, understand KINTO's business, and grow alongside KINTO as true partners.

Click here to apply for the Front-end Engineer (KINTO ONE Team) position
Click here to apply for the Back-end Engineer (KINTO ONE Team) position

2. Used Car Team

Front-end / Back-end Engineers

We're looking to grow our team with engineers who can deeply understand the used car business and work hand in hand with KINTO to drive system development forward. The knowledge and experience you gain here will benefit not only current products but also play a vital role in shaping KINTO's future services and offerings.
In other words, working on this used car product gives you the opportunity to become an engineer who makes a significant contribution to the value of both KINTO and KINTO Technologies.

Click here to apply for the Frontend Engineer (Used Car Team) position
Click here to apply for the Backend Engineer (Used Car Team) position

3. Prism Japan Team

Back-end Engineer

If you're interested in contributing to the development of a native app and helping to pioneer a new mobility market, we'd love for you to join us.

Click here to apply for the Backend Engineer (Prism Japan Team) position
Introduction

Hello, I'm hidenorioka, and I joined the company in July 2025! In this article, I asked everyone else who joined in July 2025 for their impressions right after joining and compiled their answers. I hope it will be useful content for anyone interested in KINTO Technologies (hereafter KTC), and a nice retrospective for the members who took part!

hidenorioka

![hidenorioka's profile image](/assets/blog/authors/hidenorioka/hidenorioka.png =300x)

Self-introduction

I work on web front-end development in the New Car Subscription FE Development Group of the KINTO ONE Development Division!

What is your team's structure?

We're eight front-end engineers spread across multiple locations: Tokyo, Osaka, and Fukuoka.

What's the atmosphere like?

Beyond growing the service by driving projects forward, it's an environment where you can proactively propose and discuss improvements to product quality and the developer experience. Questions and consultations come up naturally in everyday team conversations, so communication is very easy!

What motivated you to join KTC, and were there any gaps before and after joining?

I had no connection at all to the car or mobility industry before joining, but when I first learned about KINTO's new-car subscription, I was amazed that such a service existed! Wanting to get involved in growing the service further is what led me to join. I'd had chances to talk with team members beforehand, and the company publishes a lot through external media, so there was no gap before and after joining.

What I like about the office

The office connects directly underground to the nearest station, Mitsukoshimae, so commuting is comfortable even on hot or rainy days—a small but genuine pleasure.

Question from K.S. to hidenorioka: What's good about working in Muromachi?

The Nihonbashi-Muromachi area around the office is full of lunch spots, so wandering around at lunchtime is great fun!

S.N

![S.N's profile image](/assets/blog/authors/hidenorioka/2025-09-12-newcomer/sn.jpeg =300x)

Self-introduction

I mainly work on used cars in the New Service Development Division. In my previous job, I started as a back-end engineer and then became a PM. My icon shows my pets (a black Shiba named Ohagi ♂ and a cat named Tsuyu ♀).

What is your team's structure?

The used-car e-commerce site I work on is a web service that re-lists vehicles returned from KINTO (new-car) contracts as used cars for customers. The used-car team is nine people including me, and the division has close to thirty members.

What's the atmosphere like?

Communication flows easily, and it's easy to ask for advice. The team is full of people who always question the tasks in front of them, so it's an environment where we can think through root-cause solutions.

What motivated you to join KTC, and were there any gaps before and after joining?

In my previous job, I worked as an engineer and a PM, and I wanted to work at a company that uses IT to contribute to the business. There was no big gap, but projects move with more speed than I imagined, so I'm working hard to keep up. (lol)

What I like about the office

The Junction in the Muromachi office was more stylish than I expected.

Question from hidenorioka to S.N: What would you like to take on in your career at KINTO Technologies?

First, I want to make a solid success of the products and projects entrusted to me and build up experience. Then I'd like to propose services that customers want, on my own initiative, and make them a reality!

M.H

![M.H's profile image](/assets/blog/authors/hidenorioka/2025-09-12-newcomer/mh.png =300x)

Self-introduction

I joined the KINTO FACTORY Development Group in the New Service Development Division, where I handle direction. In my previous job, I worked as a producer at a major company, focusing on the UX domain.

What is your team's structure?

Four front-end engineers, three back-end engineers, one PdM, and one QA engineer—a team that can handle everything from design through QA.

What's the atmosphere like?

I mostly interact with people in Corporate Planning and the Creative Office, so I don't have that much contact with my team members and can't really judge the atmosphere. But compared with my previous workplace, which had a "tense but steady" air (in a good way), people here seem to enjoy their work.

What motivated you to join KTC, and were there any gaps before and after joining?

I felt my long experience with B2C services could be put to use right away. My previous employer was a business company with in-house development, so I already understood how to work with business divisions and how in-house development is structured; I haven't felt any major gap.

What I like about the office

I love the brightness and openness of the natural light pouring through the large windows, and the generous spacing between desk islands that keeps the space from feeling cramped.

Question from S.N to M.H: Tell us about the car you drive now, or a car you'd love to drive!

I love vintage cars, so the first-generation Toyopet Crown is my eternal dream.

Kevin Diu

Self-introduction

I'm on the DBRE team. I previously worked as a software engineer.

What is your team's structure?

We're a six-person team developing with the Scrum framework. It's a cross-organizational engineering team specializing in databases.

What's the atmosphere like?

- Development language: mainly Go
- We use AWS quite a lot

What motivated you to join KTC, and were there any gaps before and after joining?

Motivation: I wanted to contribute to the automotive industry with my technical skills. Gap before and after joining: not much.

What I like about the office

Honestly, not much...

Question from M.H to Kevin Diu: Tell us about differences in working styles between Hong Kong and Japan, and how you feel actually working in Japan.

In Hong Kong, "time is money" is always on people's minds, so work and discussions are driven by the clock, and conclusions and deliverables tend to come out quickly. In Japan, discussions often dig all the way down to root causes. Each approach has its strengths and weaknesses. As for actually working in Japan, people are kinder than I imagined. I used to watch a lot of Japanese dramas and thought I'd have to work like in "Hanzawa Naoki," but the reality is completely different. lol

H.Y

![H.Y's profile image](/assets/blog/authors/hidenorioka/2025-09-12-newcomer/hy.png =300x)

Self-introduction

Until now, I built, migrated, and operated infrastructure systems at an SIer; I joined KTC in July 2025. This is my first time at a business company, so I want to be all the more conscious of owning my work.

What is your team's structure?

Separately from our in-house services, I mainly work on TMC (Toyota Motor Corporation) projects.

What's the atmosphere like?

It varies by project, but the project I'm assigned to uses M365.

What motivated you to join KTC, and were there any gaps before and after joining?

Projects inside KTC have a modern style, so the absence of the so-called JTC (traditional Japanese company) atmosphere was a gap in a good way.

What I like about the office

Jimbocho: it was recently renovated, and the overall freshness is great!

Question from Kevin Diu to H.Y: What are you into these days?

These days I'm into Zwift!
H.H

![H.H's profile image](/assets/blog/authors/hidenorioka/2025-09-12-newcomer/hh.jpg =300x)

Self-introduction

I handle web QA in the QA Group, based in Osaka (OsakaTechLab). I did the same kind of web QA in my previous job (a certain accommodation-booking site, a certain car-purchasing site, and so on...).

What is your team's structure?

The QA Group as a whole has 12 members, and the Web team I belong to has 5.

What's the atmosphere like?

Everyone is kind, and the atmosphere makes it easy to ask questions or just chat.

What motivated you to join KTC, and were there any gaps before and after joining?

Motivation: I wanted hands-on experience with the latest technologies such as test automation and AI, and KTC already had adoption cases. Gap: I was surprised, in a good way, that IT study sessions and events are held in-house almost every week.

What I like about the office

The open space called "Park" is wonderful. Tables made from tires, car-shaped chairs, a mat with a crosswalk design—you can feel the attention to detail everywhere.

Question from H.Y to H.H: Recommend some restaurants near the Osaka office!

"Naniwa Robata Itadakitai" (浪花ろばた 頂鯛) on the 10th floor of the same building as OsakaTechLab is close by, and it's my personal recommendation! If you ever visit OsakaTechLab, do drop in!

ばんぶー (Bamboo)

![Bamboo's profile image](/assets/blog/authors/hidenorioka/2025-09-12-newcomer/bamboo.jpeg =300x)

Self-introduction

I've worked in Fukuoka ever since I graduated. At my first company, I started with feature-phone development and was involved in all kinds of projects. At my previous job at a bank, I worked as a scrum master and promoted agile adoption. In the past, the Lehman shock and the COVID shock both hit right after I joined a company, so investors, please stay on guard (lol). My motto is to enjoy everything! I tweet unofficially as (sort of?) tech PR, so please follow me: ばんぶー@KINTOテクノロジーズ (@shell_in_bamboo) / X. My icon is what came out when my instructions to an AI were so sloppy that the original was lost entirely.

What is your team's structure?

We're launching the new Fukuoka Tech Lab site! It was just my manager Nitta-san and me, but a new member joined us in August and things are livelier, with more to come! I also serve concurrently in tech PR, where I'm learning the ropes of internal and external event activities and the running of this tech blog. The members are positive and energetic, and I get a lot of inspiration from them!

What's the atmosphere like?

In Fukuoka, the three of us spend a close-knit time together, working in a relaxed, fun atmosphere. When someone visits on a business trip, we all get so happy we can hardly sit still. As for tech PR, everyone is so kind it freaks me out, and they work so fast it also freaks me out. Do people even still say "freaks me out"?

What motivated you to join KTC, and were there any gaps before and after joining?

My motivations were "it's the mighty Toyota Group, yet it has an overwhelming startup feel" and the organizational culture I sensed from the president's and vice president's messages and from casual interviews. The gap after joining: "the offices are amazing...!" The Tokyo, Nagoya, and Osaka offices are incredibly stylish and comfortable, and for Fukuoka, see below.

What I like about the office

We can't show the inside of the Fukuoka office just yet, but the view from the windows is the best!

Question from H.H to Bamboo: What kind of site do you want Fukuoka Tech Lab to become? Any goals or hopes?

We're still a small site, so I hope we can run experiments and try new things, in a good sense. I'd be delighted if we could bring in knowledge from other sites, test things out in Fukuoka, roll them out to the other sites, and create the kind of synergy that contributes to KTC and to the local region.

youhei

![youhei's profile image](/assets/blog/authors/hidenorioka/2025-09-12-newcomer/youhei.jpg =300x)

Self-introduction

Since joining, I've been running around as the person responsible for launching Fukuoka Tech Lab. In my previous job, I was a manager of a development organization. I experienced the startup phase of an organization at both my previous job and the one before it, so I hope to put that experience to use again this time.

What is your team's structure?

It's an environment where you can take on building a site from absolute zero. The site currently has three members, hiring is picking up, and we plan to keep expanding. Members from other sites actively support the Fukuoka launch, so it's an environment where you can drive many projects with real discretion. Going forward, I'd also like to deepen collaboration between sites.

What's the atmosphere like?

Thanks to my position, I've visited every site and feel I've grasped each one's character. Fukuoka's defining feature is its open atmosphere where people can chat freely. Even visitors on business trips relax a little out of their usual positions of responsibility and engage with an open mind. Small as we are, it's lively, which I love, and I want to preserve that air going forward.

What motivated you to join KTC, and were there any gaps before and after joining?

What mattered most was feeling there was real meaning in launching a new development site in Fukuoka at this exact moment. I sensed KTC's potential in being able to drive the major undertaking of in-house development for the Toyota Group with a small team. I plan to talk about the details at an event I'll be speaking at soon.

The biggest gap since joining is that this is the most technically flat company I've ever worked at. There's no lock-in to any particular technology, which made me notice my own bias: it's fine to consider technologies outside the box I'd been unconsciously confining myself to.

What I like about the office

The view. You can feel the beauty of the city you live in—Fukuoka Airport, the Hakata and Tenjin districts, Hakata Bay, Fukuoka Tower. Just gazing out is soothing, and it has made me love Fukuoka even more.

Question from Bamboo to youhei: You run a podcast as a hobby—if you could invite absolutely anyone as a guest, who would it be, and why?

Thanks for the plug (lol). I've been doing a podcast called ほっとテック (Hot Tech) for about three years. When we have guests, I always invite them in a tech context. If that constraint were lifted and it could be anyone, it would be wonderful to invite one of my favorite musicians and tell them how grateful I am for their work.

K.S.

![K.S.'s profile image](/assets/blog/authors/hidenorioka/2025-09-12-newcomer/ks.png =300x)

Self-introduction

I'm a data scientist in the Data Strategy Division. I've spent my career in quantitative analysis, as a quant and a data scientist, at financial institutions and consulting firms.

What is your team's structure?

The division has four groups: product development, data analysts, data engineers, and data science. In data science, we take on requests from the KINTO business division and the Toyota Group, carrying out data exploration and model development.

What's the atmosphere like?

It differs by group, but data science expects you to think and act on your own. The closer a group is to the business, the more its members seem to talk with each other. The company culture is flat, and senior people are always ready to listen, which I'm grateful for.

What motivated you to join KTC, and were there any gaps before and after joining?
KTC's clients have real, solid businesses, and there's an environment where data analysis can contribute to their results. I joined because I wanted to actually see how analysis affects the business, rather than just analyzing numbers and stopping there. The company is gradually growing, so there's plenty of change, but it was all within my expectations; there's been no gap before and after joining.

What I like about the office

The office is open and spacious, and it's close to the station, making commuting easy. I like the flexible working style that combines remote work and office days.

Question from youhei to K.S.: Coming to the mobility world, what's one thing that's the same and one thing that's different about data science here compared with your previous fields?

The big difference is the wealth of data types, such as location and sensor data, that I rarely handled in finance or consulting; they demand fresh thinking in both analytical perspectives and methods. Simply getting to touch this unknown data strongly stimulates my intellectual curiosity every day. What's the same is that business knowledge remains essential to understanding data correctly. Only by grasping what the numbers and fields mean and checking them against their context can you derive valuable insights—that hasn't changed.

Lastly

Thank you all for sharing your impressions after joining! New members are joining KINTO Technologies every day, and more joining posts from people across all kinds of departments are on the way, so please look forward to them. And KINTO Technologies is still looking for people to work with us in many departments and roles! For details, please check here!