TECH PLAY

KINTO Technologies Tech Blog
Hello! I'm mayu, a designer in the Creative Office at KINTO Technologies. I usually work mainly on UI/UX design for our apps, but this time I took on producing novelty goods to hand out to employees at a company event. In this article, I'll share the behind-the-scenes story of the project, from planning to design. I hope it offers some hints for anyone involved in making novelty goods.

Choosing the Novelty Goods

Theme: "Something that creates a sense of unity"

For this event, our overriding goal was a novelty that creates a sense of unity. We brainstormed ideas based on conditions like the following:

- Sparks conversations with people you don't usually talk to
- Strengthens team spirit
- Encourages innovation
- Appeals to everyone, regardless of age or gender
- Meets the needs of many people
- Easy for anyone to start using right away
- Budget of a few hundred to around 1,000 yen per person
- Stays valuable over time

Many ideas came up, but in the end we decided to produce a magnetic card stand and an original name card.

Why we chose the magnetic card stand and the name card

Magnetic card stand:

- Placed on a desk, it makes people easier to approach and so promotes communication.
- Incorporating the KINTO Technologies logo and a car motif can strengthen attachment to the company and boost motivation.
- Its simple design makes it easy for anyone to use every day.

Name card:

- A name card made for each individual employee makes it easier to strike up a conversation even with someone you are meeting for the first time, promoting interaction within the company.
- The "cropped KTC" design expresses company-wide unity through design.
- It serves as a name tag on the day of the event and can stay in use on your desk afterward.

Making the Magnetic Card Stand

1. Selecting and commissioning a vendor

We commissioned the magnetic card stands from MOKU , a site specializing in original merchandise. The deciding factor was MOKU's high degree of customizability: you can produce an original magnetic card stand just by submitting design data.

2. Prototyping

We built a quick paper prototype to check the size and usability, then placed it on an actual desk to verify visibility and practicality.

3. Designing the card stand

Using Adobe Illustrator, we created the design data by placing our logo on the vendor's template, finishing with a simple design that makes the KINTO Technologies logo stand out.

4. Designing the instruction sheet

We also created an original instruction sheet so the stand would be easy to use. This design data was likewise created in Adobe Illustrator, following the prescribed template.

5. Submitting the data and delivery

We submitted the design data, and the stands were delivered in about three weeks! (We ordered 500 units.)

Making the Name Cards

1. Designing the name card

In Figma, we created an original design featuring each person's name, department, and the icon they use on Slack. We made a prototype to confirm that the size matched the magnetic card stand. Our favorite detail is the "cropped KTC" motif; KTC is short for "KINTO Technologies". The row of small squares represents employees, carrying the meaning that "each individual employee comes together to form KTC". The black-based, simple, stylish design also conveys a tech-company feel.

2. Generating the data automatically

Creating the data for every employee by hand would have been a huge effort, so with help from our in-house engineers we generated it automatically with HTML. We built a system that imports employee information from a CSV file and populates the template automatically.

3. Printing and cutting

We printed the cards on the office printer and cut them all out by hand. It was hard work, but it saved us a great deal of cost! (laughs)

Project Results and Takeaways

After handing out the novelties, we received delighted feedback from employees such as:

- "It's easier to strike up a conversation now!"
- "The design is so cute!"
- "Having my Slack icon on it makes me feel attached to it!"

The novelties created a sense of unity, and I personally found the project deeply rewarding. It also reminded me how important it is to design not just for looks but with "how will this actually be used?" in mind. I feel we were able to demonstrate the power of design aligned with its purpose.

In Closing

I hope to apply what I learned through this project to my future design work. And if this made you think "KINTO Technologies looks like a fun place!", please take a look at our recruitment page. We look forward to talking with you! Thank you for reading!
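As a side note, the CSV-to-template generation mentioned in the name-card section could look roughly like the sketch below. This is a hypothetical Python illustration; the column names (`name`, `department`, `icon_url`), the card markup, and the `render_cards` helper are assumptions for the example, not the actual in-house tooling (which was built with HTML by our engineers):

```python
# Hypothetical sketch: read employee rows from a CSV file and fill an
# HTML card template for each row. Column names and markup are assumed.
import csv
import io
from string import Template

CARD_TEMPLATE = Template(
    '<div class="card">'
    '<img src="$icon_url" alt="">'
    '<p class="name">$name</p>'
    '<p class="dept">$department</p>'
    '</div>'
)

def render_cards(csv_text: str) -> str:
    """Render one card per CSV row, joined into a single HTML fragment."""
    rows = csv.DictReader(io.StringIO(csv_text))
    return "\n".join(CARD_TEMPLATE.substitute(row) for row in rows)

sample = (
    "name,department,icon_url\n"
    "mayu,Creative,https://example.com/icons/mayu.png\n"
)
print(render_cards(sample))
```

Dropping a fragment like this into a full HTML page and printing it from a browser would match the print-and-cut workflow described above.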
Self-Introduction

Hello. This is Koyama ( @_koyasoo ) from KINTO Technologies. Since the beginning of this year, I've been dedicated to promoting Agile practices and working full-time as a Scrum Master, supporting my team in growing stronger every day. Today, I'd like to share some of the things we've been working on.

Speaking of retrospectives...

Do you hold them regularly? When it comes to retrospectives, the KPT (Keep/Problem/Try) format often comes to mind. It's practically synonymous with the practice. KPTs are widely used in many workplaces, but are they actually effective?

Back in June, I attended Scrum Fest Osaka, and out of all the sessions I joined, one left a particularly strong impression on me. OODA!!!!!! (If you know, you know) Yep, it's Ikuo ( @dora_e_m ) san. Ikuo san's session on retrospectives really left an impression on me. Until then, I had only used KPT for retrospectives, so when I heard the words, "To keep retrospectives meaningful, we need to avoid falling into a rut," they really hit home. That made me think, "I do retrospectives all the time, I should be able to apply this right away." Or so I thought.

As mentioned on page 22 of the session deck, it was eye-opening to see how simply switching from KPT to YWT (which is quite similar) led to a flood of opinions. It's amazing how just shifting your perspective can bring out so many ideas...

Quoted from page 22 of Ikuo san's document

So in this article, I'll be sharing five retrospective techniques that I've actually tried out since then.

Summary of How to Choose the Right Retrospective Method

As Ikuo san mentioned in his session, switching between different methods to fit the situation can really help a team grow and improve. Below is a summary of each method's key features; feel free to use it as a reference.

| Method | When to Use It! | Things to Keep in Mind |
| --- | --- | --- |
| KPT | A versatile method that works anytime. | Just be careful not to rely on it too much. |
| Hot Air Balloon | Great for thinking about your team's future. | Puts more focus on current challenges than on reflecting on the past. |
| LeanCoffee | Great for discussing various topics, not just retrospectives. | Can be a bit tiring, since discussions are held under time pressure. |
| Celebration Grid | Ideal for fact-based discussions. | Hard to generate opinions when there is little actual implementation and mostly personal impressions. |
| FunDoneLearn | When you want to reflect on the positives. | Negatives often get overlooked. |
| Elephants, dead fish, and vomit | When the team seems to be building up frustration. | Facilitate carefully to keep the team from falling apart. |

Next, let's dive into each retrospective method in detail.

Let's explore retrospectives!

I hope this article encourages you (especially if you've only used KPT so far) to take the first step in trying a different approach! With that in mind, I'll walk you through how to put each method into practice with as much detail as possible. Feel free to use whatever works best for you. Just give it a try, you might be surprised. It's actually not that different from KPT! By the way, the examples in this article mainly use the online whiteboard tool Miro .

1. Hot Air Balloon

This reflection method involves replacing the "hot air balloon" in the center with your own product and thinking about what kind of "baggage" it carried, what "updrafts" helped it rise, and what "clouds" might become obstacles in the future. All you need is an image of a hot air balloon and three types of sticky notes to differentiate the categories. Here's the hot air balloon our team came up with.

Diagram of a hot air balloon

Here's how it goes:

- First, we spent 5 minutes writing about "updrafts", followed by an 8-minute discussion.
- Next, we did the same with "baggage": 5 minutes of writing, followed by an 8-minute discussion.
- Then, we repeated the process with "clouds": 5 minutes of writing, followed by an 8-minute discussion.
Finally, we wrapped up with a 10-minute discussion on the question: "What's important for making a hot air balloon fly higher?"

This method naturally encourages discussions that reflect on the present and envision the future. Compared to KPT, it breaks problems down into current issues and anticipated challenges, making discussions more focused and effective.

2. LeanCoffee

Lean Coffee is a method that starts with gathering topics, then breaks them into short time-boxed discussions. You can make this work by setting up one area where people can add and edit sticky notes for topics, and another area where selected notes are processed one by one for discussion. Miro had a template for this, so I gave it a try.

LeanCoffee Diagram

Here's how it works:

- First, participants come up with topics (8 minutes). Giving them a suggested theme can help spark ideas and make it easier to share opinions.
- Use features like polling to find out which topics interest the group the most.
- Discussions follow the cycle below, starting with the topic that gets the most votes.
- Each discussion begins with 5 minutes, including time to introduce the topic. After 5 minutes, the conversation pauses. At that point, ask participants if they'd like to continue discussing the topic. You can use a poll to decide.
- If they want to continue, add 3 more minutes. If not, move on to the next topic.
- After those 3 minutes, pause again and check if they'd like to keep going. If they want to continue, add 1 more minute. If not, move on to the next topic.
- Once the final minute is up, that topic wraps up. If people want to keep the conversation going, set aside extra time for it and wrap up the discussion within that time.

This method also gives you insight into the interests and trends among team members at any given time.
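The 5/3/1-minute extension cycle above can be summarized in a few lines. This is an illustrative Python sketch (the `lean_coffee_minutes` helper is hypothetical, not part of any Lean Coffee tooling):

```python
# Hypothetical sketch of the Lean Coffee timebox cycle described above:
# 5 minutes per topic, then extensions of 3 and 1 minutes, each granted
# only if the group votes to continue.
def lean_coffee_minutes(votes_to_continue: list[bool]) -> int:
    """Return total minutes spent on one topic, given the outcome of
    each continuation vote (True = keep discussing)."""
    total = 5                      # every topic starts with 5 minutes
    for extra, keep_going in zip([3, 1], votes_to_continue):
        if not keep_going:
            break                  # move on to the next topic
        total += extra
    return total

# The group voted to continue once, then stopped: 5 + 3 = 8 minutes.
print(lean_coffee_minutes([True, False]))   # 8
```

The point of the shrinking extensions is that a topic can never run longer than 9 minutes before the group explicitly decides to schedule more time for it.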
Plus, it might help reinforce awareness of timeboxing among the team. When facilitating, it can be tricky to step in and stop a discussion, so try using a timer or pausing at natural breaks in the conversation. Be mindful that this method relies on sticking to the timebox; if you don't, it can fall apart.

3. Celebration Grid

This method involves discussing completed actions by dividing them into six areas based on two axes: one for "success" and "failure", and another for "wrong ways", "experimental ways", and "known ways". As the name suggests, the focus is on keeping a positive mindset: celebrating every outcome, whether it's a success or a failure. It seems that people often use the diagrams from this site as a guide.

CelebrationGrid template

As shown in the diagram, each area varies in size based on the likelihood of that event occurring. The areas carry the following meanings, and the discussion followed these categories:

| | Wrong way | Experimental way | Known way |
| --- | --- | --- | --- |
| Success | Lucky! | It was a great experience! | You did the right thing! |
| Failure | It was inevitable. | It's OK, there was a lesson in it. | Unlucky |

Celebration Grid in action

Here's how it works:

- Ask participants to specify a time period and list "what they have done" (5 minutes). Guide them to consider where each item belongs as they list it.
- Take a deeper dive into each one.
- Wrap up by celebrating the many insights gained.

At first glance, this method may seem complicated, but it's actually quite simple. Since discussions are based on actual "events and facts", participants can stay grounded and discuss things without personal biases. However, because participants must first list "facts", some may find it harder to express their opinions. To make the process smoother, it's best to include people who have been actively involved in the work.

4. FunDoneLearn

As the name suggests, this method involves listing Fun (what was enjoyable), Done (what was accomplished), and Learn (what was learned).
Write down the elements that fit each category, using the areas where the circles overlap where appropriate. A Venn-diagram template with overlapping circles makes organizing easier, and making the overlapping areas larger gives more room to place sticky notes.

FunDoneLearn Diagram

It doesn't need a detailed explanation, but here's how to do it:

- Set a time limit and have participants place sticky notes (5 minutes).
- Discuss each one.

This method keeps the focus positive by incorporating an element of Fun, which helps create a generally positive and happy atmosphere for the review. On the other hand, since it pays less attention to negative aspects like KPT's Problem, it may not be the best fit if there are many issues to address.

5. Elephants, dead fish, and vomit

This method helps identify issues from three different angles. Members are encouraged to openly discuss things they might normally hesitate to say, categorized as follows:

- Elephants – Issues that everyone is aware of.
- Dead fish – Issues that could cause trouble if left unaddressed.
- Vomit – Issues that are weighing on one's mind.

You can facilitate this exercise using a simple diagram with an elephant, a fish, and vomit. However, to prevent personal conflicts among members, it's a good idea to clearly outline the ground rules in a visible way.

Drawing of an elephant, dead fish, and vomit

Here's how it works:

- First, explain the rules. Make it clear that the goal of this method is not to create conflict within the team, but to come up with ways to address existing issues. This is a key point to keep in mind when using this method.
- Ask participants to write their thoughts and opinions on sticky notes that fit each category (8 minutes). In my team, to keep the discussion from becoming too negative, we encouraged participants to add a pink sticky note if they wanted to reframe an issue into a positive perspective.
With this method, seeing others' notes while brainstorming might sometimes cause discomfort. To prevent this, we made sure that sticky notes in progress were not visible to others. If you are using Miro, enabling Private Mode is a good idea. Once everyone is ready, disable Private Mode and start discussing each topic. Since this method involves addressing negative aspects, it requires a bit more sensitivity than other approaches. That said, the atmosphere doesn't tend to feel too negative. Comments like "Oh, so that's what you were worried about!" or "I was thinking the same thing!" are likely to come up. This method is highly effective in aligning the team's approach to problem-solving.

What Applies to Any Retrospective

Having conducted six kinds of retrospectives, including KPT, I've noticed that they have more in common than you might expect.

- The ultimate goal is always to "agree on the next action as a team".
- Writing the rules in large letters helps prevent confusion. It's totally fine to share opinions during the session!
- Time guidelines: 5 minutes for basic topics, 8–10 minutes for deeper discussions.
- Discussion flow: first explain the sticky notes, then either "share your own thoughts" or "randomly ask someone who might have an opinion". (Keeping it casual makes it easier for everyone to speak up. Haha!)
- If you're also sharing your own thoughts, preparing your sticky notes in advance helps you stay focused on facilitating.

Most importantly, as long as everyone is aligned on "agreeing on the next action", any method will work. Once that's decided, the retrospective will be worthwhile. You can almost forget about everything else.

Feedback from Participants

I always feel a bit anxious after trying a new retrospective method with new members. Would KPT have been just fine...? Did I give too many instructions, leaving little time for actual reflection...? If you ever feel this way (like I do), don't hesitate to ask your team for feedback!
You'll probably hear nothing but positive responses.

- "It was refreshing to do a different kind of retrospective. It was fun." (Hot Air Balloon)
- "We were able to talk while staying conscious of the timebox, so we covered a variety of topics, which is great since we usually end up talking about the same thing. The issues became clear." (LeanCoffee)
- "We made a lot of mistakes, but this helped us distinguish between good mistakes and bad ones." (Celebration Grid)
- "It was great to understand what makes my team members enjoy their work. I liked that we could simply share fun experiences." (FunDoneLearn)
- "I'm glad that the issues I had in mind became a shared understanding within the team. I appreciated how direct and open the discussion was." (Elephants, dead fish, and vomit)

Summary

Rather than sticking to just one method, teams can grow stronger by choosing and applying the retrospective format that best fits the situation. I've only tried six so far, but I'm excited to explore even more! As mentioned at the beginning, if you've only used KPT, I highly encourage you to try the others too. Start by following the steps outlined in this article. Once you're comfortable, why not tweak and adapt the approaches to better fit your team? I'd be happy if this article helps Scrum Masters who are looking for better ways to run retrospectives.
Integrating Native Features into Flutter Apps – Our Approach to Adding an Android-Specific Camera Analysis Library

Hello. My name is Osugi, and I'm part of the Toyota Woven City Payment development group. Our team develops the payment system used in Woven by Toyota 's Toyota Woven City , covering a wide range of payment-related functions, from the backend to the Web frontend and mobile applications. So far, we've been using Flutter to develop a mobile app for Proof of Concept (PoC). In this article, we summarize the trial and error we went through to overcome the challenges of building new features around a camera analysis library that is only available natively on Android/iOS.

Introduction

Integrating native functions into a Flutter app not only adds to the development workload but also increases maintenance costs, making development more challenging. In our project, considering the development timeline and available resources, we chose not to integrate native functions directly into the Flutter app. Instead, we developed a separate PoC app and a native app for camera analysis, linking them together to carry out the PoC. After completing the PoC, when we considered integrating the Flutter app with the camera analysis app, we found that information on design guidelines and implementation methods for Flutter's native integration features was fragmented, with few systematic guidelines, especially for Android's complex UI configuration. In this article, we'll focus on Android and share design principles and practical methods for incorporating native UI into a Flutter app. Hopefully, this will be helpful for engineers facing similar challenges.

:::message
At the time of writing, the sample code was created using Flutter v3.24.3 / Dart v3.5.3
:::

App Overview

For the purposes of this article, we've simplified the app developed during the actual PoC.
The app follows these specifications:

Specifications

- When you press the start button, the camera preview is displayed.
- The camera analysis function runs on the preview image, and the analysis results are sent as notifications.

This is the app we'll be working with in this article.

Data Integration Between Flutter and Native Android

We implemented data exchange between Flutter and native Android using MethodChannel and EventChannel , enabling camera control from Flutter and analysis result notifications from native Android. MethodChannel is used for commands like starting and stopping the camera, while EventChannel is used for sending analysis result notifications. The sequence diagram below illustrates this process:

```mermaid
sequenceDiagram
    actor u as User
    participant f as Flutter
    participant mc as MethodChannel
    participant ec as EventChannel
    participant an as Android Native
    u ->> f: press start button
    activate f
    f ->> mc: start camera
    mc ->> an: set up camera
    an -->> mc: 
    mc -->> f: result
    deactivate f
    loop
        an ->> ec: analyzed result
        ec ->> f: send analyzed data
        f ->> f: show data
    end
    u ->> f: press stop button
    activate f
    f ->> mc: stop camera
    mc ->> an: reset camera
    an -->> mc: 
    mc -->> f: result
    deactivate f
```

Next, let's look at how to display the native Android camera preview UI on the Flutter side.

How to display native Android UI in a Flutter app

There are three main ways to display native Android UI in a Flutter app:

- Texture widget – Displays an image rendered on an Android native Surface within the Flutter widget tree.
- PlatformView – Embeds, displays, and controls Android native UI inside the Flutter widget tree.
- Intent – Launches a new Activity.

We'll go over the characteristics of each method and how to implement them.

Texture Widget

The Texture widget displays an image rendered on an Android native Surface within the Flutter widget tree. In other words, it allows Flutter to draw native UI images directly to the GPU.
This approach works well for use cases where latency isn't a major concern, such as camera previews and video playback. However, for UI animations requiring real-time performance, adjustments must be made on the native side, which means a solid understanding of both Flutter and Android native development is necessary. Additionally, the Texture widget itself does not detect user interactions such as touch events, so these must be handled on the Flutter side using GestureDetector or similar. That said, if it fits your requirements, it can be implemented relatively easily using the approach shown below.

Implementation Steps

First, obtain a TextureRegistry . For Flutter apps, use FlutterEngine.FlutterRenderer , which implements TextureRegistry . For Flutter plugins, retrieve it from the FlutterPluginBinding .

```kotlin
// For Flutter apps
val textureRegistry = this.flutterEngine.renderer

// For Flutter plugins
val textureRegistry = this.flutterPluginBinding.textureRegistry
```

Next, create a textureEntry (a SurfaceTexture ) from the textureRegistry , then set up a SurfaceProvider to provide a Surface to the CameraX Preview instance. Once this is done, you're all set. This Surface acts as the drawing buffer mentioned earlier.

```kotlin
val textureEntry = textureRegistry.createSurfaceTexture()
val surfaceProvider = Preview.SurfaceProvider { request ->
    val texture = textureEntry?.surfaceTexture()
    texture?.setDefaultBufferSize(
        request.resolution.width,
        request.resolution.height
    )
    val surface = Surface(texture)
    request.provideSurface(surface, cameraExecutor) { }
}
val preview = Preview.Builder().build().apply {
    setSurfaceProvider(surfaceProvider)
}
// To meet the camera analysis requirements mentioned at the beginning of the article,
// set up a cameraProvider and configure the camera's Preview and analysis processing.
```
```kotlin
try {
    camera = cameraProvider?.bindToLifecycle(
        this,
        CameraSelector.DEFAULT_BACK_CAMERA,
        preview,
        analysis, // Set the camera image analysis process here
    )
} catch (e: Exception) {
    Log.e(TAG, "Exception!!!", e)
}
```

Then, simply return the ID of the TextureEntry associated with the Surface to Flutter as the return value of the MethodChannel .

```kotlin
fun onMethodCall(call: MethodCall, result: MethodChannel.Result) {
    when (call.method) {
        "startCamera" -> {
            result.success(textureEntry.id())
        }
        "stopCamera" -> {
            stopCamera()
        }
        else -> result.notImplemented()
    }
}
```

To render a native SurfaceTexture on the Flutter side, simply set the textureId obtained from the MethodChannel on the Texture widget, and the camera preview will appear in the Flutter app.

```dart
static const platform = MethodChannel('com.example.camera_preview_texture/method');
int? _textureId;

Future<void> onPressed() async {
  try {
    final result = await platform.invokeMethod<int>('startCamera');
    if (result != null) {
      setState(() {
        _textureId = result;
      });
    }
  } on PlatformException catch (e) {
    print(e.message);
  }
}

Widget build(BuildContext context) {
  if (_textureId == null) {
    return const SizedBox();
  }
  return SizedBox.fromSize(
    size: MediaQuery.of(context).size,
    child: Texture(
      textureId: _textureId!,
    ),
  );
}
```

For an implementation using the Texture widget, mobile_scanner serves as a great reference.

PlatformView

PlatformView allows embedding Android native UI into Flutter's widget tree, making it possible to display and control it. There are three rendering modes for PlatformView : Virtual Display ( VD ), Hybrid Composition ( HC ), and Texture Layer Hybrid Composition ( TLHC )[^1]. When using the PlatformView API, TLHC is selected by default; however, if the Android native UI tree contains a SurfaceView , it falls back to VD or HC [^2]. In addition, PlatformView improves frame rate synchronization between Flutter and Android native, which was not possible with the Texture widget.
It also allows user interaction control and supports displaying UI elements beyond just camera previews and videos.

Implementation Steps

In this sample code using PlatformView , the camera preview screen is implemented with Jetpack Compose. To use Jetpack Compose in a Flutter app, add the following dependencies and configuration to app/build.gradle :

```groovy
android {
    ~
    ~
    buildFeatures {
        compose true
    }
    composeOptions {
        kotlinCompilerExtensionVersion = "1.4.8"
    }
}

dependencies {
    implementation("androidx.activity:activity-compose:1.9.3")
    implementation(platform("androidx.compose:compose-bom:2024.04.01"))
    implementation("androidx.compose.material3:material3")
}
```

Now, let's dive into the details of the implementation. Implementing PlatformView requires the following three steps:

1. Implement a NativeView that inherits PlatformView
2. Implement a NativeViewFactory that inherits PlatformViewFactory
3. Register the PlatformViewFactory with the FlutterEngine

1. Implementing NativeView

For a general implementation, please refer to the official documentation . One key difference from the official approach is that this implementation uses Jetpack Compose. Here, the CameraPreview (built with Jetpack Compose) is embedded into the Android native View tree using ComposeView .

```kotlin
class NativeView(
    context: Context,
    id: Int,
    creationParams: Map<String?, Any?>?,
    methodChannel: MethodChannel,
    eventChannel: EventChannel
) : PlatformView {
    private var nativeView: ComposeView? = null

    override fun getView(): View {
        return nativeView!!
    }

    override fun dispose() {}

    init {
        nativeView = ComposeView(context).apply {
            setContent {
                CameraPreview(methodChannel, eventChannel)
            }
        }
    }
}
```

In the Jetpack Compose implementation, PreviewView from CameraX, which is a View , is composed using AndroidView . As a side note, AndroidView can also be used for a Fragment .
```kotlin
@Composable
fun CameraPreview(methodChannel: MethodChannel, eventChannel: EventChannel) {
    val context = LocalContext.current
    // Capture the LifecycleOwner here: LocalLifecycleOwner.current can only be
    // read inside a composable, not inside the suspend functions below.
    val lifecycleOwner = LocalLifecycleOwner.current
    val preview = Preview.Builder().build()
    val previewView = remember { PreviewView(context) }

    suspend fun startCamera(context: Context) {
        // getCameraProvider() is an extension function defined elsewhere (not shown)
        val cameraProvider = context.getCameraProvider()
        cameraProvider.unbindAll()
        // To meet the camera analysis requirements mentioned at the beginning of the article,
        // configure the camera's Preview and analysis processing on the cameraProvider.
        cameraProvider.bindToLifecycle(
            lifecycleOwner,
            CameraSelector.Builder().requireLensFacing(CameraSelector.LENS_FACING_BACK).build(),
            preview,
            analysis, // Set the camera image analysis process here
        )
        preview.surfaceProvider = previewView.surfaceProvider
    }

    suspend fun stopCamera(context: Context) {
        val cameraProvider = context.getCameraProvider()
        cameraProvider.unbindAll()
    }

    LaunchedEffect(Unit) {
        fun onMethodCall(call: MethodCall, result: MethodChannel.Result) {
            when (call.method) {
                "startCamera" -> {
                    // Camera setup must run on the main thread
                    CoroutineScope(Dispatchers.Main).launch {
                        startCamera(context)
                    }
                    result.success("ok")
                }
                "stopCamera" -> {
                    CoroutineScope(Dispatchers.Main).launch {
                        stopCamera(context)
                    }
                }
                else -> result.notImplemented()
            }
        }
        methodChannel.setMethodCallHandler(::onMethodCall)
    }

    AndroidView(factory = { previewView }, modifier = Modifier.fillMaxSize())
}
```

Next, implement the NativeViewFactory (step 2) and register it with the FlutterEngine (step 3) as follows.
```kotlin
class MainActivity : FlutterFragmentActivity() {
    ~
    ~
    override fun configureFlutterEngine(flutterEngine: FlutterEngine) {
        super.configureFlutterEngine(flutterEngine)
        val methodChannel = MethodChannel(
            flutterEngine.dartExecutor.binaryMessenger,
            METHOD_CHANNEL
        )
        val eventChannel = EventChannel(
            flutterEngine.dartExecutor.binaryMessenger,
            EVENT_CHANNEL
        )
        flutterEngine
            .platformViewsController
            .registry
            .registerViewFactory(VIEW_TYPE, NativeViewFactory(methodChannel, eventChannel))
    }
}

class NativeViewFactory(
    private val methodChannel: MethodChannel,
    private val eventChannel: EventChannel
) : PlatformViewFactory(StandardMessageCodec.INSTANCE) {
    override fun create(context: Context, viewId: Int, args: Any?): PlatformView {
        val creationParams = args as Map<String?, Any?>?
        return NativeView(
            context,
            viewId,
            creationParams,
            methodChannel,
            eventChannel
        )
    }
}
```

Finally, here is the implementation on the Flutter side.

- PlatformViewsService.initSurfaceAndroidView() lets you use either TLHC or HC .
- PlatformViewsService.initAndroidView() lets you use either TLHC or VD .
- PlatformViewsService.initExpensiveAndroidView() forces the use of HC .
```dart
class CameraPreviewView extends StatelessWidget {
  final String viewType = 'camera_preview_compose';
  final Map<String, dynamic> creationParams = <String, dynamic>{};

  CameraPreviewView({super.key});

  @override
  Widget build(BuildContext context) {
    return PlatformViewLink(
      viewType: viewType,
      surfaceFactory: (context, controller) {
        return AndroidViewSurface(
          controller: controller as AndroidViewController,
          hitTestBehavior: PlatformViewHitTestBehavior.opaque,
          gestureRecognizers: const <Factory<OneSequenceGestureRecognizer>>{},
        );
      },
      onCreatePlatformView: (params) {
        return PlatformViewsService.initSurfaceAndroidView(
          id: params.id,
          viewType: viewType,
          layoutDirection: TextDirection.ltr,
          creationParams: creationParams,
          creationParamsCodec: const StandardMessageCodec(),
          onFocus: () {
            params.onFocusChanged(true);
          },
        )
          ..addOnPlatformViewCreatedListener(params.onPlatformViewCreated)
          ..create();
      },
    );
  }
}
```

By using PlatformView this way, you can integrate Android native UI into your Flutter app.

Intent

Intent is an Android feature (not specific to Flutter) that allows launching an Activity separate from the MainActivity where Flutter runs. With an Intent, you can navigate to another screen within your app, launch external apps, and exchange data between Activities. The two methods described above (the Texture widget and PlatformView) have been reported to have performance issues [^3]. Resolving such issues requires a deep understanding of both Flutter and Android native, so in some cases building a separate Android app might actually help keep development costs down. However, this poses different challenges. If your team only has Flutter engineers, you will need to catch up on Android development. And if the functionality is developed as an external application, the interface between the apps must include security measures and be designed with lifecycle considerations in mind. For instance, the following measures may be necessary:

- Validate the data exchanged between Activities.
- Restrict access so that only a specific app can call it.
- Ensure the called app functions correctly even if the calling app has been killed.

Now let's take a look at how to use an Intent in Flutter. First, we'll go over how to call another Activity from a Flutter app.

Calling Activity (the MainActivity where the Flutter app runs)

```kotlin
override fun onMethodCall(call: MethodCall, result: MethodChannel.Result) {
    if (call.method!!.contentEquals("startCamera")) {
        val dummyData = call.argument<String>("dummy_data")
            ?: return result.error("ERROR", "data is invalid", null)

        // Choose one of the following, depending on the use case:
        // In case of a screen transition within the app
        val intent = Intent(this, SubActivity::class.java)

        // For external apps
        val packageName = "com.example.camera_preview_intent"
        val intent = activity.packageManager.getLaunchIntentForPackage(packageName)
            ?: return result.error("ERROR", "unexpected error", null)
        intent.setClassName(packageName, ".SubActivity")

        // Store the data to send
        intent.putExtra("EXTRA_DUMMY_DATA", dummyData)
        intent.setFlags(Intent.FLAG_ACTIVITY_SINGLE_TOP)
        activity.startActivityForResult(intent, REQUEST_CODE)
    }
}

override fun onListen(arguments: Any?, sink: EventChannel.EventSink?) {
    eventSink = sink
}

override fun onCancel(arguments: Any?) {
    eventSink = null
}

override fun onActivityResult(requestCode: Int, resultCode: Int, data: Intent?): Boolean {
    if (requestCode == REQUEST_CODE && resultCode == Activity.RESULT_OK && data != null) {
        val result = data.getStringExtra("RESULT_DATA") ?: ""
        eventSink?.success(result)
        return true
    }
    return false
}
```

Next, let's implement the Activity that gets called from the Flutter app. Once a specific operation is completed, you can use an Intent to return data, as shown below.
Target Activity

```kotlin
val intent = Intent()
intent.putExtra("RESULT_DATA", resultData)
activity.setResult(Activity.RESULT_OK, intent)
finish()
```

By using an Intent this way, you can avoid dealing with complex UI control on both the Flutter and native Android sides while still enabling data exchange between Flutter and native Android Activities. However, security and data integrity must be carefully considered in this approach.

Summary

In this article, we've discussed how to incorporate native functionality into Flutter apps, with a focus on Android. Data communication between Flutter and native Android was achieved using MethodChannel and EventChannel . Here are the three ways to incorporate Android native UI into Flutter:

Texture widget
Great for camera previews and video displays, and relatively easy to implement. However, it requires handling user interactions yourself and may have some performance issues.

PlatformView
Lets you integrate native UI into Flutter's widget tree while enabling user interaction control. Supports embedding View, Fragment, and Jetpack Compose. Performance can also be an issue.

Intent
Allows seamless screen transitions and launching of other apps, making it possible to directly display Android UI and exchange data. However, security and data handling require careful attention.

As mentioned above, each method comes with its own strengths and limitations when integrating Android native features into a Flutter app. The best choice depends on your project's specific needs.

Notes

The thumbnail of the Droid is reproduced or modified from work created and shared by Google and used according to terms described in the Creative Commons Attribution 3.0 License.

[^1]: Hosting native Android views in your Flutter app with Platform Views
[^2]: Android Platform Views
[^3]: Performance
This article is the entry for day 16 in the KINTO Technologies Advent Calendar 2024 🎅🎄 Hi, I’m Nakanishi from the Manabi-no-Michi-no-Eki (Learning Roadside Station) team. This year, the Manabi-no-Michi-no-Eki project was officially launched and later established as a team. As part of this initiative, we're also hosting an in-house podcast, and for this year’s Advent Calendar, we’d love to share more episodes with you. What is the Manabi-no-Michi-no-Eki (Learning Roadside Station)? It’s a project aimed at making the frequently held in-house study sessions more accessible and effective. The initiative is led by passionate volunteers within the company, with the goal of supporting study sessions and fostering a culture of knowledge sharing across the organization. 10X Innovation Culture Program The Learning Roadside Station Podcast features interviews with employees who organize study sessions within the company. This segment is called “A Peek into the Study Session Next Door”. For today’s podcast, we’re joined by Awata-san and HOKA-san, who are working on the 10X Innovation Culture Program provided by Google. Usually, HOKA-san conducts the interviews, but today, Akeda-san and I will be taking on that role. So, without further delay, let's jump right into the interview. Interview Awata-san: Thank you for having me. My regular work focuses on platform engineering, ensuring that database-based operations are accessible to everyone. Beyond that, I’m interested in corporate culture, so I’m involved in a variety of activities. HOKA-san: I usually work in the Human Resources Group’s Organizational Human Resources Team. We plan and conduct training by identifying needs and challenges through interviews with both new and existing employees. Akeda-san: Please tell us what prompted you to hold these study sessions. Awata-san: As a member of the Google Cloud Enterprise User Group ( Jagu’e’r ), I took part in a subcommittee on corporate culture and innovation. 
As part of our activities, we decided to try the 10X Innovation Culture Program, gathered around 15 volunteers, and went ahead with it. HOKA-san was also among them, and things took off from there. HOKA-san: Yes, that’s right. When we held our first event at the Google office in Shibuya, the reaction from the participants was extremely good. Collaborating in a workshop with people we usually had no interaction with opened up new opportunities for communication. Akeda-san: Next, please tell us some details about the events. How did they expand after the initial session? Awata-san: We initially held them at the Google office, then subsequently shifted to holding them in-house. The in-house events also drew a large number of participants and got an extremely positive reaction. HOKA-san: Seeing KTC employees engage so positively, the Google team also expressed high praise. We hope to go on spreading this program further both inside and outside the company. Akeda-san: What are the prospects for the future? Awata-san: In the future, we’d like to become certified facilitators, and get to spread 10X to other companies as well. HOKA-san: First, we plan to roll it out to other in-house groups and cultivate a culture of innovation across KTC. Akeda-san: What kind of organization do you envision for KTC? Awata-san: I want to make it a vibrant hive of thinking outside the box, flexible communication, and collaboration. HOKA-san: I want to create a culture where people can take on challenges without fearing failure. To achieve this, I plan to utilize the 10X methods. Akeda-san: Finally, could you share a message with all our listeners? Awata-san: Culture isn’t something that can be imposed; it naturally emerges from people's actions. If you're interested, we’d love for you to join us. HOKA-san: If you're interested, feel free to start by just taking a look—don’t hesitate to reach out. 
Through the 10X Innovation Culture Program, we aim to make KTC a more collaborative and supportive organization to work in. If you’re interested, please contact Awata-san or HOKA-san. In this article, we shared insights into the 10X Innovation Culture Program, its background, and what the future may hold for it. We hope you’re looking forward to the next study session as well!
This article is part of Day 15 of the KINTO Technologies Advent Calendar 2024 🎅🎄 Hi, I’m Nakanishi from Learning Roadside Station. This year, the Learning Roadside Station project was officially launched and structured as an organization. As part of our initiatives, we also run an in-house podcast, and for this year's Advent Calendar, we’d like to share more about it. What is "Learning Roadside Station"? "Learning Roadside Station" is a project launched to make the frequent study sessions held within the company more convenient and effective. The aim is to support the holding of study sessions, mainly by volunteers within the company, and to promote knowledge sharing within the company. Osaka Tech Lab Information Sharing Meeting In the KTC Learning Roadside Station Podcast, we interview people who hold in-house study sessions. This segment is called "Surprise! Our Neighbor’s Study Group." Today's podcast guests are Okita-san and Fukuda-san, who are leading the Osaka Tech Lab Information Sharing Meeting. Could you start by introducing yourselves? Interview Okita-san: Yes, my name is Okita. I belong to the mobile app development group, and as a development PM, I am responsible for connecting the mobile development team with other groups. I look forward to our discussion today. Fukuda-san: My name is Fukuda. I joined KTC (formerly KINTO Corporation) in July 2020 and worked in the Production Group. I took 10 months of parental leave, and in February 2024, I returned to work. I am now part of the Creative Division, where I manage the operation and renewal of KTC's corporate website. I look forward to our discussion today. Hoka-san: Thank you. Could you tell us about what inspired you to start the Osaka Tech Lab Information Sharing Meeting? Fukuda-san: Osaka Tech Lab was launched in April 2022. At first, it was just Tomonaga-san from the Analysis Group running it alone. 
As more members joined, we started hearing comments like, "I don't even know what the person sitting next to me is working on." That’s when we decided to start an information-sharing meeting to improve communication. Okita-san: Fukuda-san was the founder of the Osaka Tech Lab Information Sharing Meeting. At first, we started with self-introductions, and by sharing our hobbies, we aimed to find common ground and build connections among members. Hoka-san: I see. So, Okita-san, you have been involved since the very first session and helped promote the initiative. How has it evolved over time? Okita-san: In the beginning, there were only a few members, so we could just gather and share ideas casually. Now, in our 17th session, the number of members has grown, and naturally, the format of our meetings has evolved as well. Hoka-san: What does it mean that Osaka Tech Lab is the main player in this event? Okita-san: The purpose is to stimulate communication within Osaka Tech Lab. We encourage members to share their work and initiatives, fostering stronger horizontal connections across teams. Additionally, our discussions often lead to tech blog content, helping us document and share insights more effectively. Hoka-san: That sounds great! It feels like the initiative is truly taking shape. Have you noticed any changes in the reactions of the participants or the atmosphere? Fukuda-san: At first, the meetings were casual and conversational, but over time, they have become a space for discussing challenges. We now also use these sessions to talk about how to improve our office environment. Okita-san: For example, we didn’t have a clock in the office, so we installed one, and also added bookshelves. It’s a continuous collaborative effort where we share ideas and implement improvements together. Hoka-san: What are your plans for the future of the Osaka Tech Lab Information Sharing Meeting? 
Okita-san: As our organization grows, I want to maintain the friendly and open atmosphere we have built while continuing to prioritize communication. Fukuda-san: I want to promote KTC from Osaka. We’ll consider whether to continue the information-sharing meetings in their current form or evolve into a new format. Hoka-san: What if members of other departments want to participate? Okita-san: We are recruiting LT speakers every month, so we’d love for more people to participate! If you’re interested, feel free to reach out to any Osaka Tech Lab member. Fukuda-san: After the information-sharing meeting, we also hold a beer bash, so we hope everyone uses it as an opportunity to connect and communicate in a relaxed setting. Hoka-san: Lastly, do you have a message for our audience? Okita-san: Please visit Osaka! We’d love to have you here. Fukuda-san: Bring an LT (Lightning Talk) and join our information-sharing meetings! We’d love to have you here. Hoka-san: Thank you both for your time today. I can really see how your efforts in Osaka are creating a positive impact across the company. Thank you, Okita-san and Fukuda-san. That wraps up the interview. In this article, we covered the Osaka Tech Lab Information Sharing Meeting, the background behind its operation, and its future prospects. Please look forward to the next study session!
Introduction Hello! My name is Yoo, and I am a member of the New Car Subscription Development Group at KINTO Technologies. While our approach may not be perfect, we continuously strive to tackle challenges and improve step by step. In this article, I’d like to share how we implemented Redis Pub/Sub in Spring Boot to dynamically change the system date. Background and Motivation When conducting QA and testing, there are many cases where it is necessary to change the system date to verify specific behaviors. This is especially important for subscription-based services, where business logic often depends on specific dates. For example, testing requires validating processes tied to the start and end dates of the subscription period, monthly fees, settlement charges for mid-term cancellations, maintenance inspections, and vehicle inspections. Previously, the system date was defined in the configuration file, meaning that every time the date needed to be changed, the container had to be redeployed. As a result, each test or QA cycle required more than five minutes just for redeployment, significantly impacting efficiency. In this article, I will introduce how we solved this issue and improved our workflow. Benefits of introducing Redis Pub/Sub By implementing Redis Pub/Sub, we optimized system date changes in test environments, making them more efficient and responsive. As a result, we have successfully reduced the workload for testing and QA, leading to improved operational efficiency. Specifically, container redeployment is no longer required. Instead, by simply sending a message (the desired setting value) to the corresponding setting item (topic), each container can instantly receive the update and apply the changes in real time. Even in multi-container environments, all subscribers receive the message simultaneously, allowing system settings to be updated across multiple containers without requiring a restart. 
Additionally, system date changes are now logged, making it possible to track and review change history when needed. Furthermore, with Spring Boot Profile settings, this feature can be enabled exclusively in designated test environments, preventing accidental application to production or other environments. *For more details on Profiles, see here . What is Redis Pub/Sub Redis Pub/Sub is one of the messaging patterns used in message queuing systems. Message Queuing is a method of asynchronous communication commonly used in serverless and microservices architectures to enable real-time event notification in distributed systems. This mechanism is widely used not only as a database and cache but also as a message broker, as it supports scalable and stable communication between different software modules. Main components Topic: The subject or category that subscribers listen to. Publisher: Sends messages related to a specific topic. Subscriber: Receives messages from publishers for subscribed topics. Keyspace Notifications Redis can monitor real-time changes to keys and values by receiving events that impact the Redis dataset in various ways. How is it implemented? System date change mechanism We implemented an API as a publisher to send messages to designated topics. When an event occurs for a subscribed topic (key), multiple containers (subscribers) receive the message and update the settings in real time. System configuration It is built using Java and Spring Boot. Applications are containerized and run in a cloud environment. 
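To make the Topic/Publisher/Subscriber roles above concrete, here is a minimal, self-contained sketch of the publish/subscribe pattern in plain Java. This is an illustrative in-memory broker, not our actual Redis-backed implementation; the class names and the "system-date" topic are made up for the example:

```java
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.CopyOnWriteArrayList;
import java.util.function.Consumer;

// A tiny in-memory message broker illustrating the Pub/Sub roles.
class Broker {
    private final Map<String, List<Consumer<String>>> topics = new ConcurrentHashMap<>();

    // Subscriber: registers interest in a topic.
    void subscribe(String topic, Consumer<String> subscriber) {
        topics.computeIfAbsent(topic, k -> new CopyOnWriteArrayList<>()).add(subscriber);
    }

    // Publisher: sends a message; every subscriber of the topic receives it.
    void publish(String topic, String message) {
        topics.getOrDefault(topic, List.of()).forEach(s -> s.accept(message));
    }
}

public class PubSubDemo {
    public static void main(String[] args) {
        Broker broker = new Broker();

        // Two "containers" subscribe to the same settings topic.
        StringBuilder containerA = new StringBuilder();
        StringBuilder containerB = new StringBuilder();
        broker.subscribe("system-date", containerA::append);
        broker.subscribe("system-date", containerB::append);

        // One publish reaches all subscribers at once, with no restart needed.
        broker.publish("system-date", "2025-01-01");
        System.out.println(containerA + " / " + containerB);
    }
}
```

The key property this sketch shows is the one we rely on in our test environments: a single publish fans out to every subscriber of the topic, which is why all containers can pick up a new system date simultaneously.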
Adding the necessary library to build.gradle

implementation 'org.springframework.data:spring-data-redis'

Implementing the RedisConfig class

@AllArgsConstructor
@Configuration
public class RedisTemplateConfig {
    private final RedissonClient redissonClient;

    @Bean
    public RedisTemplate<String, String> redisTemplate() {
        RedisTemplate<String, String> template = new RedisTemplate<>();
        template.setConnectionFactory(new RedissonConnectionFactory(redissonClient));
        template.setDefaultSerializer(new StringRedisSerializer());
        return template;
    }

    @Bean
    public RedisMessageListenerContainer redisContainer() {
        RedisMessageListenerContainer container = new RedisMessageListenerContainer();
        container.setConnectionFactory(new RedissonConnectionFactory(redissonClient));
        return container;
    }
}

Implementing the Publisher

Implement an API that sends messages to a designated topic.

@RestController
@RequiredArgsConstructor
public class SystemTimeController {
    private final SomeService service;

    @PostMapping("/update")
    public void updateSystemTime(@RequestParam String specifiedDateTime) {
        service.publish(specifiedDateTime);
    }
}

@Service
@RequiredArgsConstructor
public class SomeService {
    private final RedisTemplate<String, String> redisTemplate;

    // Define the topic key
    private static final String FOO_TOPIC = "foo-key";

    public void publish(String specifiedDateTime) {
        // Send a message to the specified topic (writing the key triggers a keyspace event)
        redisTemplate.opsForValue().set(FOO_TOPIC, specifiedDateTime);
    }
}

Implementing the Subscriber

The subscriber receives messages when an event occurs for a subscribed topic (key). 
@Slf4j
@Component
@Profile({"developer1", "developer2"}) // Only enabled for the specified test environment profiles
public class FooKeyspaceEventMessageListener extends KeyspaceEventMessageListener {
    private final RedisMessageListenerContainer listenerContainer;
    private final RedisTemplate<String, String> redisTemplate;

    private static final String FOO_TOPIC = "foo-key";

    public FooKeyspaceEventMessageListener(
            RedisMessageListenerContainer listenerContainer,
            RedisTemplate<String, String> redisTemplate) {
        super(listenerContainer);
        this.listenerContainer = listenerContainer;
        this.redisTemplate = redisTemplate;
    }

    @Override
    public void init() {
        doRegister(listenerContainer);
    }

    @Override
    protected void doHandleMessage(Message message) {
        // Retrieve the new system date from Redis
        String systemTime = redisTemplate.opsForValue().get(FOO_TOPIC);
        // Invoke the method that updates the system date setting
        updateSystemTimeConfig(systemTime);
        log.info("Received a message for FOO_TOPIC: {}", message);
    }
}

Finally

Thank you for reading this article to the end. There are still many areas for improvement, but we strive to address challenges step by step and grow with each iteration. While the structure and implementation may not be perfect, I believe that gradual progress in the right direction is what truly matters. We will continue to learn, ensuring that these small advancements accumulate and lead to even better outcomes. I hope we can continue this journey of growth together. Thank you very much.
Introduction

Hello! I’m Yao Xie. I develop the Android version of the KINTO かんたん申し込みアプリ (KINTO easy application app) in the Mobile App Development Group at KINTO Technologies. In this article, I will introduce how to use AGSL (Android Graphics Shading Language) to enhance custom UI components and perform advanced image processing in Android apps.

What is AGSL?

AGSL (Android Graphics Shading Language) is a GPU-based shading language designed for Android. Based on the Skia Shading Language (SkSL), AGSL offers Android-specific optimizations for producing advanced graphic effects. Because AGSL is fully integrated with Android's rendering pipeline, complex visual effects can be implemented efficiently and smoothly.

From GLSL to SkSL, and then to AGSL

Graphics shading languages have evolved significantly to meet the demand for high-quality graphics in modern apps. In short:
GLSL (OpenGL Shading Language): The original shading language, used together with OpenGL to render 2D and 3D graphics. GLSL lets you write custom shaders that run on the GPU.
SkSL (Skia Shading Language): Introduced as part of the Skia graphics library. SkSL is used on various platforms, including Android, to render 2D graphics.
AGSL (Android Graphics Shading Language): A shading language designed specifically for Android. It builds on SkSL's capabilities and is tuned to integrate smoothly with Android's rendering pipeline.

Key differences between GLSL, SkSL, and AGSL

AGSL is optimized for mobile devices, offering higher performance and lower power consumption than GLSL. Its integration with the Android rendering pipeline enables more efficient graphics rendering.
GLSL: C-like syntax for OpenGL. Cross-platform, but constrained on Android by variations in OpenGL ES.
SkSL: Similar to GLSL, but optimized for Skia's 2D graphics. Used mainly inside Skia and rarely available for direct Android development.
AGSL: Based on SkSL, with Android-specific enhancements. Fully integrated with Android's graphics pipeline for optimal performance.

How does AGSL work? 
The diagram below is a hierarchy (ordered from top to bottom) showing where an AGSL shader string sits within Android's graphics rendering system and data flow. (It is a conceptual diagram, not an exact system architecture.)

Getting started

Step 1: Define the gradient shader

Using AGSL, create a shader file that applies a smooth gradient effect to the text only. Thanks to the composable input, the gradient is applied cleanly to the text's alpha mask.

@Language("AGSL")
val gradientTextShader = """
    uniform float2 resolution;  // Text size
    uniform float time;         // Time for animation
    uniform shader composable;  // Input composable (text mask)

    half4 main(float2 coord) {
        // Normalize coordinates to [0, 1]
        float2 uv = coord / resolution;

        // Hardcoded gradient colors
        half4 startColor = half4(1.0, 0.65, 0.15, 1.0); // Orange
        half4 endColor = half4(0.26, 0.65, 0.96, 1.0);  // Blue

        // Linear gradient from startColor to endColor
        half4 gradientColor = mix(startColor, endColor, uv.x);

        // Optional: Add a subtle animation (gradient shifting)
        float shift = 0.5 + 0.5 * sin(time * 2.0);
        gradientColor = mix(startColor, endColor, uv.x + shift * 0.1);

        // Use the alpha from the input composable mask
        half4 textAlpha = composable.eval(coord);

        // Combine the gradient color with the composable alpha
        return gradientColor * textAlpha.a;
    }
""".trimIndent()

Step 2: Create a Modifier for the shader

Define a custom Modifier that applies the gradient shader to text. The shader animates the gradient using a dynamic time parameter.

fun Modifier.gradientTextEffect(): Modifier = composed {
    val shader = remember { RuntimeShader(gradientTextShader) }
    var time by remember { mutableStateOf(0f) }

    // Increment animation time
    LaunchedEffect(Unit) {
        while (true) {
            time += 0.016f // Simulate 60 FPS
            delay(16)
        }
    }

    this.graphicsLayer {
        shader.setFloatUniform("resolution", size.width, size.height)
        shader.setFloatUniform("time", time)
        renderEffect = RenderEffect
            .createRuntimeShaderEffect(shader, "composable")
            .asComposeRenderEffect()
    }
}

Step 3: Apply the shader to a text component

Use Modifier.gradientTextEffect in the UI to apply the gradient effect.

@Composable
fun GradientTextDemo() {
    Box(
        modifier = Modifier
            .fillMaxSize()
            .padding(16.dp),
        contentAlignment = Alignment.Center
    ) {
        Text(
            text = 
"Gradient Text",
            fontSize = 36.sp,
            fontWeight = FontWeight.Bold,
            color = Color.White,
            modifier = Modifier.gradientTextEffect()
        )
    }
}

Result

Is that all AGSL can do? Far from it: AGSL's capabilities go well beyond the basics, helping you build dynamic, engaging, high-performance app experiences. Let's explore, with concrete examples, how AGSL can take your app to the next level.

1. Enhancing UI components

With AGSL, you can create eye-catching UI elements that capture users' attention and highlight your app's purpose.
Animated borders: Create marquee or blinking effects around cards, buttons, or images.
Custom gradients: Implement dynamically flowing, animated, GPU-accelerated gradients.
Dynamic glow effects: Add glowing highlights and halos to buttons and sliders.

Example: a driving-skills training app

Imagine you are developing a driving-skills training app. The goal is to make the interface visually appealing and draw users to key elements such as the "Start Training" button. Here is how AGSL can deliver a dynamic glow effect.

AGSL shader code:

@Language("AGSL")
val glowButtonShader = """
    // Shader for a glowing rounded rectangle button
    uniform shader button;       // Input texture or color for the button
    uniform float2 size;         // Button size
    uniform float cornerRadius;  // Corner radius of the button
    uniform float glowRadius;    // Radius of the glow effect
    uniform float glowIntensity; // Intensity of the glow
    layout(color) uniform half4 glowColor; // Color of the glow

    // Signed Distance Function (SDF) for a rounded rectangle
    float calculateRoundedRectSDF(vec2 position, vec2 rectSize, float radius) {
        vec2 adjustedPosition = abs(position) - rectSize + radius; // Adjust for rounded corners
        return min(max(adjustedPosition.x, adjustedPosition.y), 0.0) + length(max(adjustedPosition, 0.0)) - radius;
    }

    // Function to calculate glow intensity based on distance
    float calculateGlow(float distance, float radius, float intensity) {
        return pow(radius / distance, intensity); // Glow falls off as distance increases
    }

    half4 main(float2 coord) {
        // Normalize coordinates and aspect ratio
        float aspectRatio = size.y / size.x;
        float2 normalizedPosition = coord.xy / size;
        normalizedPosition.y *= aspectRatio;

        // Define normalized rectangle size and center
        float2 normalizedRect = float2(1.0, aspectRatio);
        float2 normalizedRectCenter = normalizedRect / 2.0; 
        normalizedPosition -= normalizedRectCenter;

        // Calculate normalized corner radius and distance
        float normalizedRadius = aspectRatio / 2.0;
        float distanceToRect = calculateRoundedRectSDF(normalizedPosition, normalizedRectCenter, normalizedRadius);

        // Get the button's color
        half4 buttonColor = button.eval(coord);

        // Inside the rounded rectangle, return the button's original color
        if (distanceToRect < 0.0) {
            return buttonColor;
        }

        // Outside the rectangle, calculate glow effect
        float glow = calculateGlow(distanceToRect, glowRadius, glowIntensity);
        half4 glowEffect = glow * glowColor;

        // Apply tone mapping to the glow for a natural look
        glowEffect = 1.0 - exp(-glowEffect);
        return glowEffect;
    }
""".trimIndent()

Result

The button emits a pulsing glow that draws attention while creating an atmosphere reminiscent of car headlights.
https://youtube.com/shorts/CW1yBgJyDo4?rel=0

2. Performing advanced image processing

AGSL excels at real-time image manipulation, enabling dynamic, interactive effects. With AGSL, you can build fast, GPU-accelerated image-processing effects.
Custom filters: Add artistic effects such as sepia, pixelation, or vignette.
Dynamic blur: Apply blur in real time, including motion blur and depth-of-field effects.
Color adjustments: Dynamically adjust brightness, contrast, and saturation in the UI.

Example: a ripple effect on an image

Imagine your app displays an image of the moon, and you want to add a ripple effect, as if the moon were reflected on water, to make the interface more interactive and engaging.

AGSL shader code:

@Language("AGSL")
val rippleShader = """
    // Uniform variables: inputs provided from the outside
    uniform float2 size;       // The size of the canvas in pixels (width, height)
    uniform float time;        // The elapsed time for animating the ripple effect
    uniform shader composable; // The shader applied to the composable content being rendered

    // Main function: calculates the final color at a given fragment (pixel) coordinate
    half4 main(float2 fragCoord) {
        // Scale factor based on the canvas width for normalization
        float scale = 1 / size.x;

        // Normalize fragment coordinates
        float2 scaledCoord = fragCoord * scale;

        // Calculate the center of the canvas in normalized coordinates
        float2 center = size * 0.5 * scale;

        // Calculate the distance from the current fragment to the center
        float dist = distance(scaledCoord, center);

        // 
Calculate the direction vector from the center to the fragment
        float2 dir = scaledCoord - center;

        // Apply a sinusoidal wave based on the distance and time
        float sin = sin(dist * 70 - time * 6.28);

        // Offset coordinates by applying the wave effect in the direction of the fragment
        float2 offset = dir * sin;

        // Calculate the texture coordinates with the ripple effect applied
        float2 textCoord = scaledCoord + offset / 30;

        // Sample the composable shader using the adjusted texture coordinates
        return composable.eval(textCoord / scale);
    }
""".trimIndent()

Result

With this shader, you can add depth and elegance to the images in your app at minimal performance cost.
https://www.youtube.com/shorts/80QOTzNUHLg?rel=0

3. Enabling procedural graphics

Procedural graphics are perfect for building visually striking interfaces without relying on static assets.
Pattern generation: Create procedural textures such as stripes, grids, and noise.
Shape animation: Design morphing shapes and moving patterns.
3D-style effects: Convey depth and perspective without actual 3D rendering.

Example: an animated loading screen

Loading screens tend to be monotonous, but with AGSL they can be transformed into dynamic works of art. For example, you can display sparkling animated light balls while the app loads to catch the user's eye.

AGSL shader code:

@Language("AGSL")
val lightBallShader = """
    uniform float2 size;       // The size of the canvas in pixels (width, height)
    uniform float time;        // The elapsed time for animating the light effect
    uniform shader composable; // Shader for the composable content

    half4 main(float2 fragCoord) {
        // Initialize output color
        float4 o = float4(0.0);

        // Normalize coordinates relative to the canvas center
        float2 u = fragCoord.xy * 2.0 - size.xy;
        float2 s = u / size.y;

        // Loop that computes the light-ball effect
        for (float i = 0.0; i < 180.0; i++) {
            float a = i / 90.0 - 1.0;           // Calculate a normalized angle
            float sqrtTerm = sqrt(1.0 - a * a); // Circular boundary constraint
            float2 p = cos(i * 2.4 + time + float2(0.0, 11.0)) * sqrtTerm; // Oscillation term

            // Compute position and adjust with distortion
            float2 c = s + float2(p.x, a) / (p.y + 2.0);

            // Calculate the distance factor (denominator)
            float denom = dot(c, c);

            // Add light intensity with color variation
            float4 cosTerm = cos(i + float4(0.0, 2.0, 4.0, 0.0)) + 1.0; 
            o += cosTerm / denom * (1.0 - p.y) / 30000.0;
        }

        // Return final color with an alpha of 1.0
        return half4(o.rgb, 1.0);
    }
""".trimIndent()

Result

This shader makes the app's loading screen futuristic and stylish, and the wait feels shorter and more enjoyable.
https://youtube.com/shorts/pUTU0KRmFek?rel=0

4. Improving app performance

AGSL shines in performance-critical scenarios: by offloading rendering tasks to the GPU, it achieves smooth, efficient animation.
Efficient animation: Handles complex real-time effects smoothly.
Battery optimization: Delivers striking effects while keeping power consumption to a minimum.

Example: weather animation on a map view

Suppose your product manager asks you to add a weather-animation overlay to a map view. Traditional approaches are performance-intensive, but AGSL takes advantage of Android's optimized rendering pipeline for efficient GPU rendering with minimal CPU overhead.

AGSL shader code for rain:

@Language("AGSL")
val rainShader = """
    uniform float time;        // The elapsed time for animating the rain
    uniform float2 size;       // The size of the canvas in pixels (width, height)
    uniform shader composable; // Shader for the composable content

    // Generate a pseudo-random number based on input
    float random(float st) {
        return fract(sin(st * 12.9898) * 43758.5453123);
    }

    half4 main(float2 fragCoord) {
        // Normalize fragment coordinates to the [0, 1] range
        float2 uv = fragCoord / size;

        // Rain parameters
        float speed = 1.0;           // Speed of raindrops
        float t = time * speed;      // Time-adjusted factor for animation
        float density = 200.0;       // Number of rain "drops" per unit area
        float length = 0.1;          // Length of a raindrop
        float angle = radians(30.0); // Angle of the rain (in degrees)
        float slope = tan(angle);    // Slope of the rain's trajectory

        // Compute grid position and animated raindrop position
        float gridPosX = floor(uv.x * density);
        float2 pos = -float2(uv.x * density + t * slope, fract(uv.y - t));

        // Calculate the raindrop visibility at this fragment
        float drop = smoothstep(length, 0.0, fract(pos.y + random(gridPosX)));

        // Background and rain colors
        half4 bgColor = half4(0.0, 0.0, 0.0, 0.0);   // Black transparent background
        half4 rainColor = half4(0.8, 0.8, 1.0, 1.0); // Light blue raindrop color

        // Blend the background and raindrop color based on drop visibility
        half4 color = mix(bgColor, 
rainColor, drop);
        return color; // Output the final color for the fragment
    }
""".trimIndent()

Result

This shader reproduces rain realistically, can also handle clouds and snow (the code for those two is omitted here), and runs smoothly even on low-end devices.
https://youtube.com/shorts/l63i3mQ_n2Y?rel=0

Conclusion

AGSL is a versatile tool for creating effects in Android apps that look stunning, are highly interactive, and perform optimally. Whether you are enhancing UI components, performing advanced image processing, generating procedural graphics, or improving performance in animation-heavy scenes, AGSL helps your app stand out. With AGSL, the possibilities are limited only by your imagination. Give it a try and bring your app to life!
This article is the entry for day 14 in the KINTO Technologies Advent Calendar 2024 🎅🎄 Hi, I’m Nakanishi from the Manabi-No-Michi-No-Eki (Learning Roadside Station) team. This year, the Learning Roadside Station project was officially initiated and subsequently reorganized into a team. As part of our initiatives, we also run an in-house podcast, and for this year’s Advent Calendar, we’d like to share more about it. What is the Manabi-No-Michi-No-Eki (Learning Roadside Station)? The Manabi-No-Michi-No-Eki (Learning Roadside Station) project aims to enhance the accessibility and effectiveness of the frequently held in-house study sessions. The initiative is driven by dedicated volunteers within the company, aiming to support study sessions and promote a culture of knowledge-sharing throughout the organization. Reading Session to understand the General Managers’ Meeting Minutes The Learning Roadside Station Podcast showcases interviews with individuals who organize study groups at KTC. This segment is called "A Peek into the Study Session Next Door". This time, we will be speaking with Omori-san and Takagi-san, the hosts of the "Reading Session for General Manager’s Meeting Minutes." Interview Hoka-san: Could you start by introducing yourselves? Omori-san: Yes, I’m Omori from Corporate IT. I usually handle PC kitting at the Muromachi 16th Floor Center. I am a member of the Asset Platform Team, responsible for managing work devices and SaaS account licenses. I am also in charge of preparing and collecting devices for new employees. Takagi-san: Yes, I’m Takagi, also part of the Corporate IT team. I work in the Tech Service Team and commute between Jimbocho and Muromachi. As a member of the Service Desk, I manage internal inquiries and offer problem resolution. Specifically, I am responsible for managing Self-Service Management (GSM) and OPIT Management. Hoka-san: Thank you. Could you share how the "Reading Session for General Managers’ Meeting Minutes" began? 
Omori-san: It all began with Kinchan in Nagoya, who initially proposed the idea. In Corporate IT, we rarely have direct access to frontline business information. Therefore, we started this session to enhance productivity by sharing the General Managers’ Meeting minutes, discussing them, and learning from one another. Takagi-san: I feel the same way too. By reviewing the meeting minutes, we can anticipate business trends and apply that insight to our work. For example, we can proactively prepare before official requests are made, enhancing work efficiency. Hoka-san: What kind of impact has this session had so far? Omori-san: Although it may not always directly relate to our daily tasks, reviewing the meeting minutes helps us understand the background of projects, allowing us to make better and more informed proposals. This improves the quality of operations. Takagi-san: I completely agree. Reading the meeting minutes allows us to grasp business movements and respond more efficiently to unexpected requests. The insights gained from the meeting minutes are invaluable for making informed business decisions and proposals. Hoka-san: What are your future plans for this session? Takagi-san: I’d like this session to also serve as a platform for facilitators to challenge themselves and enhance their skills. We aim to encourage more new participants, creating a lively and engaging learning environment. Omori-san: I completely agree. To deepen business understanding, I’d like to continue reading and discussing the meeting minutes, enabling all participants to apply this knowledge to their work. We also aim to gather and organize information, allowing people to catch up later if needed. Hoka-san: Finally, is there any message you’d like to share with the listeners? Omori-san: This session is open to everyone. If you’re interested, please feel free to join us! Let's enhance our business knowledge and elevate work quality together. 
Takagi-san: We’re planning to create a Slack channel to share announcements. Participating in this session will help deepen your understanding of the business and enhance your work. Hoka-san: Thank you both for your time today. This time, we explored the details of the Reading Session for the General Managers' Meeting Minutes, including its operational background and future prospects. We hope you look forward to the next study session!
登壇レポート 岡(okapi)

2025年2月20日にAppiumの勉強会「 Appium Meetup Tokyo 」に登壇してきました。発表したスライドは、「効率的なアプリ自動化のためのガイドラインと実践方法」( https://speakerdeck.com/kintotechdev/xiao-lu-de-naapurizi-dong-hua-notamenogaidoraintoshi-jian-fang-fa ) で確認できますので、ぜひご覧ください。

Appium Meetup Tokyoを行った背景

弊社の新規開発アプリの不具合発生率は、Webに比べると高い傾向(10倍近いことも)
↓ テスト負荷が高いので、アプリのテスト自動化をしなくては
↓ Appiumで自動化作業開始
↓ 情報を探してもAppiumについて学ぶ場所が見つからない
↓ であれば自分達で開催してしまおう!!

ということで登壇してきました。

発表資料で意識した点

① 弊社では、QA業務に理解のある開発エンジニアがとても多いので、前向きに作業を進めやすく、いっしょに品質を高めて作り上げられるといった利点があります。その点をアピールするため、QAと開発チームで協力して「自動化しやすいアプリの作成」をしている点を主として説明しました。

② 質問もしやすいように、Appiumが分からない人でも理解できる資料を意識して作成していたため、オンラインで7件、オフラインで7件の計14件の質問をいただき、盛り上がって嬉しかったです。

質問例

当日いただいた質問の例を紹介します。

Q1: 開発とQAが協力している点に関して、IDを振る際に同じような要素がたくさんあるページではどのようにIDを振っていますか?例えば、先ほどスライドに映っていた、車種がたくさん表示される画面ではどのようにしていますか?
A1: IDはオブジェクトや画面名ごとに定義しており、必ず一意になるように設定しています。車種が表示されるページでは、開発チームで使用している車種情報の仕様書に沿ってIDを振っています。

Q2: IDは、iOSとAndroidで別で定義することもあるのでしょうか?
A2: Appiumで使用するIDについては、全てiOSとAndroidで共通のIDを設定しています。

登壇した感想

外部での発表は初めてでしたが、事前に社内の合同勉強会で練習を行い、本発表前にも十分な準備をしたため、問題なく登壇することができました。私たちのように外部での発表経験がない方も、まずは身近な人や社内で練習を重ね、その後に外部で発表すると、発表が苦手な方でも取り組みやすいのでおすすめです。

練習風景です ↓

今後に向けた意気込み

日本国内にはAppiumを学ぶ場所がほとんど存在しないため、我々のAppiumに関する知見やノウハウを引き続き「Appium Meetup Tokyo」( https://autifyjapan.connpass.com/event/342867/ ) で発信し、良いコミュニティを築けるように頑張ります!
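A1の「オブジェクトや画面名ごとに一意なIDを定義する」という運用は、例えば次のような命名ルールのスケッチで表せます。関数名・画面名・要素名はいずれも説明用の仮のもので、本記事で実際に使われている命名規則そのものではありません。

```python
# 画面名と要素名からiOS/Android共通のアクセシビリティIDを組み立てるスケッチ。
# 命名規則は説明用の仮のもので、登録済みIDと重複した場合は例外を投げて一意性を担保します。

def build_accessibility_id(screen: str, element: str, registry: set) -> str:
    """screen_element 形式のIDを生成し、一意性をチェックして登録する。"""
    accessibility_id = f"{screen}_{element}"
    if accessibility_id in registry:
        raise ValueError(f"duplicate id: {accessibility_id}")
    registry.add(accessibility_id)
    return accessibility_id

ids = set()
login_button = build_accessibility_id("login", "submit_button", ids)
# 車種一覧のように同種の要素が並ぶ画面では、仕様書上のキー(例: 車種コード)で区別する
car_cell = build_accessibility_id("car_list", "cell_prius", ids)
print(login_button, car_cell)  # login_submit_button car_list_cell_prius
```

このIDをiOS(accessibilityIdentifier)とAndroid(resource-idやcontent-description)に同じ値で設定しておけば、A2のとおりAppium側はAccessibility IDロケーター1つで両OSの要素を取得できます。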
登壇レポート パンヌウェイ (Pann Nu Wai)

今回はAppiumの取り組みについてお話ししたいと思います。

Appium Meetup Tokyoを行った背景

社内でテスト自動化を主に行っているパンヌウェイです。Appiumの取り組みを広げるために様々な活動を行っていますが、その一環として社内の合同勉強会で話をさせてもらいました。この機会を通じて、私たちのチームがどのようにしてテスト自動化を進めているのかを共有しました。そして合同勉強会での発表内容をブラッシュアップし、今回はAppium Meetup Tokyoで発表することができました。

発表資料で意識した点

チーム内でAppiumを利用してモバイルアプリの自動テストを実現するまでの道のりを、4つのステップとして説明しました。自動化テストエンジニアではない方でも、テスト仕様書を読めばソースコードを把握できるように、仕様書作成から自動化テストのパフォーマンスまで含めて記述しています。

登壇した感想

外国人として日本語の発音は少し苦手ですが、登壇に向けて何度も練習を重ねた結果、当日の登壇は無事に成功しました。多くの方々からフィードバックをいただき、大変有意義な時間を過ごすことができました。

練習風景です ↓

今後に向けた意気込み

私は外部にもAppiumに関する情報を発信していきたいと考えています。社内の取り組みを外部に広めることで、多くの人々と知識や経験を共有し、さらにテスト自動化の分野を発展させていきたいと思っています。次回の登壇に向けて、さらに内容を充実させ、より多くの価値を提供できるように頑張ります。これからもどうぞよろしくお願いいたします。

最後に

本記事は「Appium Meetup Tokyo」の登壇レポートですが、第1回イベントの「開催レポート」( https://blog.kinto-technologies.com/posts/2025-02-20-Appium-Meetup-Tokyo-開催レポート/ ) も執筆していますので、併せてお読みいただけると嬉しいです。
Introduction

Hello, I'm Kuwahara from the SCoE Group at the Osaka Tech Lab in KINTO Technologies (KTC). SCoE stands for Security Center of Excellence, a term that might not be widely recognized yet. In April 2024, KTC restructured the CCoE team into the SCoE Group. To learn more about the SCoE Group, check out the article SCoE Group: Leading the Evolution of Cloud Security. For more details about the Osaka Tech Lab, KTC's Kansai base, visit Introduction to Osaka Tech Lab.

The mission of the SCoE Group is to "implement real-time guardrail monitoring and improvement activities" across AWS, Google Cloud, and Azure environments. These activities focus on three key areas:
- Preventing security risks
- Continuously monitoring and analyzing security risks
- Responding promptly when a security risk arises

In this post, I'll provide a closer look at the work of KTC's cloud security engineers.

A Day in the Life of a Cloud Security Engineer

To provide a clearer picture, I'd like to walk you through a typical day for a cloud security engineer (please note that due to the sensitive nature of the field, some aspects cannot be shared in detail).

Checking alerts

The first thing we do in the morning is check whether there are any high-risk alerts. We use CSPM (Cloud Security Posture Management) and threat detection services to understand the security status of the entire cloud environment and check whether there are any alerts that require immediate action. KTC uses services such as AWS Security Hub, Amazon GuardDuty, and Sysdig Secure for CSPM and threat detection. When checking alerts, we consider the following:
- Alert prioritization: Alerts are classified and prioritized based on their severity and scope of impact.
- Alert triage: Identify the cause of an alert and take the necessary action.
- Management of false positives ("over-detection"): Security tools can sometimes produce false positives.
This may cause activities that are actually not problematic to be reported as alerts. A cloud security engineer also manages these as part of alert handling.
- Identification of operations required for work: Related to managing false positives, some alerts may be triggered by operations that are required for work, for example, maintenance tasks regularly performed by the person in charge of each product. A cloud security engineer identifies these activities and responds to them appropriately.

These checks give us an up-to-date picture of the security status of the entire cloud environment and reveal any alerts that require immediate action.

Information Gathering and Catch-up

Next, a cloud security engineer catches up on cybersecurity trends and the latest information on cloud services such as AWS. The following information sources are used:
- X (formerly Twitter): Cloud security engineers follow cybersecurity experts and industry leaders on X. They share the latest threat information and countermeasures, allowing for real-time information gathering.
- Official news and blogs from AWS and Google Cloud: Official information from cloud service providers is an important source of information about new feature releases and security updates. This helps cloud security engineers stay informed about new service launches, the latest technological trends, and best practices.
- Other news sites: By regularly checking news sites and blogs focused on cybersecurity, cloud security engineers can understand trends across the industry and catch up with the latest threats and attack methods.

Threat Detection with SIEM

KTC uses Splunk Cloud Platform as its SIEM (Security Information and Event Management). Security-related logs are aggregated in Splunk, which provides an environment for cross-sectional analysis and monitoring. That day, I discovered a suspicious log on the Splunk dashboard.
The log stated: "Attempted to create a resource for a service restricted by Google Cloud's organizational policy, but the operation failed." We were able to determine the general activity from the Google Cloud audit log dashboard we had built in Splunk, but we decided to investigate in more detail.

First, we identify the users who are repeatedly retrying to create resources for services restricted by Google Cloud organizational policies. User information is masked in Google Cloud's policy-denied audit logs, so users cannot be identified from these logs alone. Instead, we identify them by cross-referencing the audit logs with other sources such as terminal logs. We created a query for this analysis and identified the users in question.

Next, we created queries to analyze the behavior of the identified users in more depth. It turned out that the identified user was attempting to use Vertex AI, an AI/ML service. Because no request to use Compute services had been made for the project in question, their use was restricted by the organizational policy. When using a Notebook with Vertex AI, a Compute Engine (GCE) instance is launched, so the attempt violated the organizational policy. Ultimately, we determined that this was a harmless activity caused by an omission of the services to be used when applying for the new Google Cloud project.
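The first step of that investigation, finding users who repeatedly retry a denied operation and naming them via another log source, can be sketched as follows. This is an illustrative Python reduction of the cross-sectional analysis, not our actual Splunk query; the record fields and mapping table are hypothetical simplifications.

```python
from collections import Counter

# Simplified, hypothetical audit-log records: in the real policy-denied logs the
# principal is masked, so correlation with other logs is needed to name the user.
denied_events = [
    {"masked_principal": "user-a1f3", "service": "compute.googleapis.com"},
    {"masked_principal": "user-a1f3", "service": "compute.googleapis.com"},
    {"masked_principal": "user-a1f3", "service": "compute.googleapis.com"},
    {"masked_principal": "user-9c2e", "service": "storage.googleapis.com"},
]

# Hypothetical mapping recovered from a second source (e.g. terminal logs).
terminal_log_mapping = {"user-a1f3": "alice@example.com", "user-9c2e": "bob@example.com"}

def repeated_retries(events, threshold=3):
    """Return users whose denied requests meet the retry threshold."""
    counts = Counter(e["masked_principal"] for e in events)
    return {
        terminal_log_mapping.get(principal, principal): n
        for principal, n in counts.items()
        if n >= threshold
    }

print(repeated_retries(denied_events))  # {'alice@example.com': 3}
```

In practice the same shape of logic is expressed as a SIEM query (aggregate by masked principal, join against the secondary log source, filter on a retry threshold), but the idea is the same.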
Cost Optimization for Cloud Vendor-Native Security Services

Security services provided by cloud vendors are charged on a pay-as-you-go basis, so as cloud resources increase, the security service charges also increase. Our idea of "security" is "security for business," and "security that hinders business" is unacceptable. Therefore, the balance between security and cost is also an important point, and cost optimization of security services is part of the SCoE Group's mission.

On that day, I investigated the potential for cost optimization of several security services that accounted for a high proportion of the overall cost. The graph above shows the services targeted in this analysis. Among them, I paid particular attention to AWS Config. (Specific item names have been masked.)

AWS Config is a service for auditing, evaluating, and recording the configuration of AWS resources. Until November 2023, the only recording method AWS Config offered was to record every time a resource configuration change occurred, a mode called "recording frequency: continuous recording." In other words, if resources change frequently, the number of records in AWS Config increases, and the usage fee grows proportionally.

As an example, let's look at network-related events. The data below is a graph showing the number of VPC and network-related configuration changes in an AWS account over a one-week period. You can see that CreateNetworkInterface and DeleteNetworkInterface, which correspond to the creation and deletion of Elastic Network Interfaces (ENIs), occur approximately 17,000 times per day. KTC runs containers on AWS Fargate with Amazon Elastic Container Service (ECS), so an ENI is created and deleted each time an ECS task (container) starts and stops. Under these circumstances, with AWS Config set to "recording frequency: continuous recording," the number of AWS Config records associated with these changes becomes huge, and the bill grows accordingly.

However, in November 2023, AWS Config added a new feature that lets you select "recording frequency: daily recording." It allows you to adjust the recording frequency for each resource type, providing flexibility in balancing security and cost. In general, this setting is believed to help optimize the cost of using AWS Config.
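To see why the recording mode matters at this scale, here is a back-of-the-envelope comparison based on the roughly 17,000 ENI creations and deletions per day measured above. The per-record price is a placeholder parameter, not actual AWS Config pricing, and daily recording is simplified to one aggregated snapshot per day; check the official pricing page for real figures.

```python
# Rough sketch: monthly AWS Config record counts for ENI churn under the two
# recording modes. The price is a hypothetical placeholder, NOT real AWS pricing.

DAYS_PER_MONTH = 30
ENI_CHANGES_PER_DAY = 17_000   # from the one-week measurement in the article
PRICE_PER_RECORD = 0.003       # placeholder USD per configuration item recorded

def monthly_records(mode: str) -> int:
    if mode == "continuous":
        # every configuration change produces a record
        return ENI_CHANGES_PER_DAY * DAYS_PER_MONTH
    if mode == "daily":
        # simplified here to a single aggregated daily snapshot; in reality it is
        # one record per changed resource per day
        return 1 * DAYS_PER_MONTH
    raise ValueError(mode)

for mode in ("continuous", "daily"):
    records = monthly_records(mode)
    print(f"{mode:10s}: {records:>8,} records, ~${records * PRICE_PER_RECORD:,.2f}/month")
```

Even with placeholder prices, the difference of several orders of magnitude in record counts explains why the recording mode is worth revisiting, though, as the article notes next, accounts managed by AWS Control Tower should not change this setting directly.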
However, this is only the case if you are not using AWS Control Tower . AWS Control Tower is a service for centrally managing the governance of multiple AWS accounts. If you use AWS Control Tower to manage AWS Config in your AWS account, check Guidance for creating and modifying AWS Control Tower resources . Please pay attention to the following sentence at the beginning of the guidance: Do not modify or delete any resources created by AWS Control Tower, including resources in the management account, in the shared accounts, and in member accounts. If you modify these resources, you may be required to update your landing zone or re-register an OU, and modification can result in inaccurate compliance reporting. As this statement indicates, modifying or deleting resources created by AWS Control Tower by any means other than AWS Control Tower is not recommended . Specifically, as of December 2024, AWS Control Tower does not provide the feature to modify the frequency of AWS Config recording. Therefore, changing the recording frequency of AWS Config under AWS Control Tower management is not recommended, and the official documentation also states that it may cause problems. Taking into account the content of the official documentation, I also contacted AWS Support just to be sure and received the same opinion. In this way, when “a setting itself is possible but poses the risk of problems or is not recommended,” it becomes difficult to maintain stable cloud security and governance. The result could be "security that hinders business" . In light of the above, we decided to postpone changing the recording frequency of AWS Config for now and submitted an improvement request to AWS Support. I believe that proposing such an improvement request to enhance the convenience of cloud services is a modest yet very important initiative. 
Preparation for a Security Study Session

Finally, I created presentation materials for our regularly held in-house security and privacy study sessions. The SCoE Group has formulated "Cloud Security Guidelines" that summarize the key points of cloud security for the "requirements definition," "design," and "development" phases of product development, and has made them available in-house. This set of guidelines is an important resource for ensuring compliance with the security policies of the group companies to which KTC belongs, minimizing security risks, and supporting efficient development. I host study sessions to raise awareness and enhance understanding of the Cloud Security Guidelines. In the sessions, I provide detailed explanations of each item in the guidelines, while also incorporating specific cases and practical advice. On that day, I created presentation materials on IAM (Identity and Access Management) best practices, keeping them concise enough to fit within a 20-minute timeframe.

Conclusion

I've shared a glimpse into a day in the life of a cloud security engineer at KTC. While this is just a snapshot, I hope it helped you gain a better understanding of what the role entails. The SCoE Group is currently looking for new team members. Whether you have hands-on experience in cloud security or are simply passionate about the field, we'd love to hear from you. Feel free to reach out to us. For more information, please check here.
はじめに こんにちは!KINTOテクノロジーズでデザイナーをしている桃井( @momoitter )です。 クリエイティブ室に所属しており、 コーポレートサイト や くもびぃ (KINTO公式マスコットキャラクター)関連サイトなどの制作に、最先端のWEB表現を取り入れながら携わっています。 2024年11月に「超本部会」という会社のイベントが開催され、私はそのイベントのオープニングムービーの作成を担当しました。その冒頭のワンシーンで、3つの生成AIを使用し、イベントのスタートを宣言する女性のキャラクターを作成しました。 実際の映像 https://www.youtube.com/watch?v=pVj_UQ_3-tg 「もちろんです」と言っているのは、その直前に「準備はいい?」と問いかけるシーンがあったため。 今回はこちらの喋るオリジナルキャラクター作成の工程や、作成時にどのようなことを考えたかをご紹介します。 独自のキャラクターを作成して、言葉を喋らせたい AIを取り入れて印象に残る映像を手軽に作成したい という方はぜひご覧ください! 背景 イベント全体のクリエイティブを監修するアートディレクターからのオーダーとしては、「 コーポレートサイト のKVで使われている動画を再編集して1分のオープニングムービーを作る」というものでした。 ですが、ただ再編集するだけでは社員からすると既視感があるので、華やかにイベントのスタートを切れるように、会場の空気を惹きつける何かが必要と感じていました。 そこで目をつけたのが、弊社のSlack内にある「しぇるぱ」というAIチャットボット。 AIを駆使し、サプライズとしてそのしぇるぱを擬人化した映像を作れば、注目があつまるのではないかと考えました。 使用したAI 喋るオリジナルキャラクター作成にあたり、下記3つのAIを使用しました。 Adobe Firefly(キャラ画像生成) TTSMaker(テキスト読み上げ) Runway(キャラを喋らせる) ここから先は、これらのAIを使用しどのように動画を生成したかをご紹介します。 1.キャラ画像生成 Adobe Firefly Adobeが提供している画像生成AIツール。 Adobe Stockなど著作権フリーの画像を学習しているので、著作権の侵害の心配なく使用できます。 一般的に有名な画像生成AIでは、著作権フリーを謳っているものも多いですが、アニメのキャラに似たものが生成できてしまったり実際はグレーなものが多いものの、社内イベントであるとはいえ著作権はしっかりクリアしておきたかったため、そのような問題を気にせず使用できるこちらのAIをセレクトしました。 Adobe Firefly 生成のイメージ 今回のキャラクターの元ネタになった「しぇるぱ」は、弊社のSlackでこのようなアイコンで表示されています。 このアイコンから 「しぇるぱ」の「ぱ」→女性らしさを感じる音の響き アイコンがピンク→ピンクの髪の毛 AIのチャットボット→スマートでデジタル感のある雰囲気 などキャラクターのイメージを膨らませていきました。 画面上の操作 Fireflyを開くとこのような画面になっています。 大まかにいうと、下の入力エリアに画像を生成するためのプロンプトを打ち込み、左側のメニューで縦横比・構成・スタイル・トーンなどの調整を行います。 今回は試行錯誤の末、「3dのキャラクター、女性、ピンクの髪、背景は白、上半身、白くてシンプルでデジタルな服装、正面を向く」というプロンプトで生成していきました。 量産 プロンプトがある程度固まってくると、良い生成結果に出会うためには運次第でもあるので、100~200枚をひたすら生成しました。 選定 イベントの始まりをフレッシュにスタートさせたかったので、「AIオペレーター」的なオフィシャル感、安心感があるキャラクターが理想でした。 そのため 幼すぎる 服が奇抜 顔が怖い など、イメージから遠いものは除外していきました。 決定した画像 細かい選定作業を経て、最終的にクールでありながら親しみも感じられるこちらの画像に決定しました。 2.テキスト読み上げ TTSMaker 打ち込んだテキストを音声に変換するAI音声ジェネレーター。 このようなAI準拠のテキスト読み上げサービスは多数存在するのですが、有料だったり無料でもクレジット表記をしないといけないものが多かったので、無料かつクレジット表記無しで利用できるこちらのツールを使用しました。 TTSMaker 画面上の操作 TTSMakerを開くとこのような画面になっています。 手順としては 言語を選択 読み上げさせたいテキストを入力 サンプル音声を視聴しながら声色を選択 
しゃべる速さ、声の高さなど、詳細の設定 変換 になります。 今回はAIオペレーター的な、オフィシャル感、安心感がある声が理想だったので、サンプル音声を聴き比べながら、「406 - yuki つみゆき-🇯🇵 japanese female」の声色を選択し、「もちろんです。超本部会を始めます。」というテキストを読み上げてもらいました。 実際の音声 https://www.youtube.com/watch?v=r4Zw2by669I 3.キャラを喋らせる Runway AIを活用して簡単に高品質な動画を生成・編集できるツール。 「Lip Sync Video」という、人物やキャラクターの静止画を、音声に合わせて喋らせる機能があったため、このツールを使用しました。 画面上の操作 1.「Generative Audio」内、「Lip Sync Video」を開き、先ほど生成したキャラクターの画像をドラッグ&ドロップ 2.キャラクターの画像の顔の範囲の認識が合っているか確認し、問題なければ「upload audio」をクリック 3.先程生成した音声をドラッグ&ドロップし、「Generate」をクリック 生成された映像 https://www.youtube.com/watch?v=usY4yB9Z1YA 音声に合わせて喋るキャラクターの映像が生成されました。 応用編 その1 https://www.youtube.com/watch?v=raYsnhwZONo アップロードする音声を曲にすると、キャラクターに歌わせることもできます。 応用編 その2 https://www.youtube.com/watch?v=3edVClgoLug このように人物の静止画を喋らせることも可能です。 先日の社内勉強会(東京開催)で、急遽登壇者が大阪からの出張ができなくなってしまったため、静止画とボイスメモを用意してもらい、このようなAI生成映像で発表を行いました。 4.仕上げ 喋るキャラクター映像の作成方法としては以上で、ここからはプラスαです。 実は「AIでキャラクターを喋らせる」というところまでは、上記で紹介したAIを使えば誰でも作ることができてしまうぐらい簡単です。 ただ、私はデザイナーとして「クリエイティブ室」に所属しているので、クリエイターとしての意地といいますか、他の部署でも作れるようなものにはしたくなかったため、最後にIllustratorで作成した「しぇるぱ」のバルーン型3DCGを、After Effectsを使用しふわふわ浮遊するモーションをつけ、先ほど生成した映像と合成することで画としての完成度を上げ、「クリエイターならでは」という価値を加えました。 完成した映像 https://www.youtube.com/watch?v=pVj_UQ_3-tg 最後に一手間加えることで、ただのキャラクターが喋る映像が、一気にグラフィカルな表現が加わった映像へと進化しました。 まとめ 会場で投影された際の様子 クリエイティブ系の生成AIにはそれぞれ特性があり、できることには限りがあります。 それらの特性を理解し組み合わせることで、今回は単一のツールでは作成できないクオリティの映像を生み出すことができました。 イベント当日はこのキャラクターが大きいスクリーンで投影され、ありがたいことに あれすごかったですね、どうやって作ったんですか!? クオリティ高くて、外注してるのかと思いました。 など社員から声をかけていただくことも多く、狙ったインパクトを残すことができたかなと思います。 それぞれのAIツールの操作としてはとても簡単で、非クリエイターでも使用できるようなものです。 アイデアさえあればこのような印象に残る映像を作成できるので、この記事を見て気になった方はぜひ実践してみてください! 最後までご覧いただき、ありがとうございました。
This article is part of Day 13 of the KINTO Technologies Advent Calendar 2024 Hi, I’m Nakanishi from Learning Roadside Station. This year, the Learning Roadside Station project was officially launched and structured as an organization. As part of our initiatives, we also run an in-house podcast, and for this year’s Advent Calendar, we’d like to share more about it. What is "Learning Roadside Station"? “Learning Roadside Station” is a project aimed at making in-house study sessions, which are frequently held, more accessible and effective. The initiative is led by passionate volunteers within the company, with the goal of supporting study sessions and fostering a culture of knowledge sharing across the organization. Factory Automotive Study Group The Learning Roadside Station Podcast features interviews with employees who organize study groups within the company. This segment is called "Surprised! My Neighbor's Study Group." This time, we interviewed Miura-san from the Factory Team. Interview Hoka-san: Thank you, Miura-san. Could you introduce yourself and tell us about your role? Miura-san: Thank you very much. Officially, I am the Team Leader of the KINTO Factory Team, part of the Project Promotion Group in the Project Development Division. The Factory Team develops products not only with KTC but also in collaboration with KINTO's General Planning Department. Hoka-san: For this interview, we’d like to focus on the fact that you are studying automobiles within the Factory Team. Could you tell us more about that? Miura-san: First of all, I want everyone to enjoy their work. I believe that understanding the products we sell makes the work more engaging and rewarding. Since we handle automobile-related products, having technical knowledge about cars makes the development process even more enjoyable. Having a background in the automotive industry, I felt that sharing my knowledge would help us make better proposals, which is why I started the study group. 
Hoka-san: How did you decide to share that knowledge? Miura-san: We hold online study sessions. Hoka-san: How have the participants responded to the study sessions? Miura-san: Based on survey feedback, many participants found the information fresh and valuable, since they wouldn't normally come across such details in their daily work. Attendance has held steady, and people listen with real interest. Hoka-san: How long have you been running these sessions? Miura-san: At first, the sessions were one hour long, but now each one is 30 minutes, and we hold them once a month. Hoka-san: What topics have you covered in your study sessions so far? Miura-san: Recently, we talked about the evolution of in-car networks and navigation systems. We also discussed how the automotive industry is evolving based on insights from CES in Las Vegas. Hoka-san: Did you attend CES in Las Vegas? Miura-san: I wasn't able to attend in person, but I shared my own thoughts based on the exhibition content that is publicly available online. Hoka-san: Do you have any upcoming study sessions planned? Miura-san: Next, we'll be discussing the vehicle installation process. Using dealer manuals, we'll explore how different parts are assembled and installed in cars. Hoka-san: What should I do if I want to join the study sessions? Miura-san: Our sessions are conducted online, so anyone interested is free to participate. We're also working on creating a system to visualize and organize study session information for easier access. Hoka-san: What inspired you to start the study group? Miura-san: It started as a team-building initiative. Our group transitioned from a project team to a formal team, and I wanted to ensure that everyone had a deeper understanding of automobiles.
Hoka-san: Lastly, do you have a message for your colleagues? Miura-san: Since we work in the automotive industry, I believe that deepening our knowledge of cars makes our work more enjoyable and meaningful. If you’re interested, I encourage you to join our study sessions! Hoka-san: Thank you very much, Miura-san. I hope you will continue to share the fascinating world of automobiles with your colleagues through these study sessions. This time, we shared insights about the Factory Team, the background to its operations, and future prospects. Please look forward to the next study session!
自己紹介

KINTOテクノロジーズにて主にプロダクトセキュリティ、セキュリティガバナンス業務に携わっている森野です。RB大宮アルディージャとちいかわが好きです。ここ10年はサイバーセキュリティ、情報セキュリティに関する仕事に携わっています。それ以前はWebアプリケーションエンジニアとして、Web効果測定システムやECサイトのフロントエンドシステムの開発・運用に長く携わっていました。本記事では当社のVDP(Vulnerability Disclosure Program)カイゼン活動について紹介させて頂きます。

VDP(Vulnerability Disclosure Program)とは

企業や組織が外部のセキュリティ研究者やホワイトハッカーから脆弱性の報告を受け取るための制度です。2023年10月に楽天グループが公開サーバーに「security.txt」を配置しVDPを開始した事で、世間の認知度が向上しました。 参考:楽天が公開サーバーにテキスト設置、セキュリティー向上に役立つ「security.txt」

security.txtとは

2022年4月に「RFC 9116: A File Format to Aid in Security Vulnerability Disclosure」として定義された仕組みで、企業や組織が脆弱性の開示方法を説明し、セキュリティ研究者などが発見した脆弱性を報告しやすくするためのものです。2023年11月に当社もsecurity.txtを配置しました。

security.txt設置後の反応

報奨金の有無を確認する問い合わせが殆どで(当社は報奨金制度は提供していない)、KINTO/KTCの脆弱性情報を持っているのか否か不明な報告が多く寄せられました。

ホワイトハッカーからの報告キタ━(゚∀゚)━!

2024年8月に当社サービスに存在する脆弱性の報告がありました。私たちのグループで報告内容を検証した結果、事実であることが確認できたため、開発グループに依頼して脆弱性を修正しました。

VDPのカイゼン

VDPの意義を実感できた一方、下記の課題感があったため、Issue Hunt社が提供しているVDPサービスの活用を2024年11月から開始しました。
- 報奨金の有無などVDPガイドラインの提示
- 報告対象となるWebサービスやアプリケーションの提示
- 報告テンプレートの提示

開始から2025年3月7日現在で6件の報告があり、うち2件は対応が必要な脆弱性と判断して修正を行いました。予想以上の成果に正直驚いています。Issue Hunt社のサイトに導入事例として当社が紹介されているので、宜しければそちらもご覧ください。 セキュリティ向上の新常識!車サブスク業界におけるVDP導入の成功事例

P3NFEST Bug Bounty 2025 Winterへのプログラム提供

前述の通り、当社では報奨金制度は提供していません。しかし、報奨金制度の効果検証および将来のインターネットの安全を担う学生を応援することを目的に、Issue Hunt社主催の学生向けバグバウンティプログラムにプログラムを提供することにしました。バグバウンティ対象のサービスは以下の通りです。
- KINTOテクノロジーズコーポレートサイト
- KINTO Tech Blog(当サイト)

開催期間は2025年2月17日(月)から2025年3月31日(月)までです。詳細はイベント情報ページをご覧ください。学生の皆さんの挑戦をお待ちしております。 P3NFEST Bug Bounty 2025 Winter
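参考までに、RFC 9116で定義されているフィールドを使ったsecurity.txtの最小構成の例を示します。値はすべて説明用の架空のもので、当社が実際に公開している内容ではありません(必須フィールドはContactとExpiresで、ファイルは /.well-known/security.txt に配置します)。

```
# security.txt の最小構成例(値はすべて架空のものです)
Contact: https://example.com/security/report
Expires: 2026-03-31T00:00:00.000Z
Preferred-Languages: ja, en
Policy: https://example.com/security/vdp-policy
Canonical: https://example.com/.well-known/security.txt
```

ContactとExpiresだけでも有効なsecurity.txtとして機能しますが、報告対象や報告手順を示すPolicyを併記しておくと、本文で触れた「対象が不明な報告」を減らす助けになります。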
Hello! I am Wada ( @cognac_n ), a Generative AI Evangelist at KINTO Technologies (KTC), working in the Generative AI Utilization Project (PJT). At KTC, the adoption of generative AI is advancing across various areas; several earlier posts on this blog cover generative AI use cases at KTC. Both engineers and non-engineers are utilizing generative AI in ways that fit their roles and tasks. The Generative AI Utilization PJT has been working towards the vision of becoming a "company where using generative AI is second nature for everyone". This time, we'd like to introduce our efforts.

1. Introduction of the Generative AI Utilization PJT

This initiative was launched in January 2024 to promote the use of generative AI within the company. The Generative AI Utilization PJT currently has three main functions.

(Figure: Three key functions of the Generative AI Utilization PJT)

These functions are not independent but are part of a continuous cycle aimed at accelerating the adoption of generative AI in various scenarios:
- Generating innovative ideas
- Assessing feasibility
- Implementing and delivering solutions
- Expanding through case study deployment

The goal is to speed up this cycle, ensuring generative AI is utilized effectively in every scenario.

(Figure: Generative AI Utilization PJT functions and cycle)

In this article, we will introduce our activities with a focus on education and training.

2. The Basic Concept of the Education and Training System

To realize the vision of "a company where everyone naturally utilizes generative AI", we have adopted three key principles:
- Generative AI is not just for specialists
- There are "optimal utilization levels" based on each role
- Emphasis on step-by-step learning, from basics to advanced expertise

(Figure: Basic idea of the training system)

At KTC, multiple instructors conduct training sessions on a variety of topics.
In addition, the training participants include both engineers and non-engineers, covering a wide range of roles and responsibilities. To ensure consistently high-quality training, even in such a diverse environment, we established a shared understanding of the fundamental principles. This approach wasn't predetermined from the start; instead, it gradually developed through ongoing refinements driven by internal feedback.

3. Training System Implementation

Based on these three core principles, we have developed a structured, step-by-step training program.

| Training Name | Target Audience | Content |
| --- | --- | --- |
| Beginner | All employees | Basic knowledge of generative AI and prompt engineering. The foundational first step for everything. |
| Case Study | All employees | Introduction of internal and external use cases. Develop the ability to take best practices and adapt them independently. |
| Improved Office Productivity | Selected employees from each department (Ambassador system) | Master generative AI as a tool to create business value. Drive business process transformation with AI integration. Become an in-house advocate and evangelist for AI utilization. |
| Generalist | People involved in system development | Learn key aspects of system development using generative AI. Develop the ability to assess technology, create value, and validate outcomes. |
| Engineering | Engineers responsible for implementation | Build practical implementation skills for system development using generative AI. Gain hands-on knowledge and experience to effectively deliver value. |

Each journey uniquely defines the target level of generative AI utilization.

4.
Emerging Value

How engineers are changing:
- Proposing the addition of generative AI features to existing systems
- Planning and pitching new services utilizing generative AI
- Independently developing AI-powered tools to improve work efficiency
- Advanced utilization of generative AI tools such as GitHub Copilot

How non-engineers are changing:
- Active use of generative AI in day-to-day operations
- Improved technical communication with AI support
- Taking on the challenge of developing simple tools

As the posts introduced at the beginning of this article show, employees who have undergone training are now actively leveraging generative AI within their roles and responsibilities. From daily tasks and communication to system development, generative AI is driving efficiency improvements and adding value across various scenarios. The types of inquiries we receive have also evolved: from "I don't know what's possible" to "I tried it! How can I improve it further?" or "I believe we can achieve something like this, can we collaborate?" With growing generative AI literacy, employees are no longer hesitant to "try first", and they are developing an intuitive sense of "this seems possible, and valuable".

5. Future Prospects

Generative AI technology is evolving rapidly. KTC/KINTO is beginning to achieve "commonplace AI usage", but there is no definitive goal for what "commonplace" should be. We will keep "aiming to be a company where using generative AI is second nature for everyone" and continue pushing forward with our initiatives!

We Are Hiring!

KINTO Technologies is looking for passionate individuals to help drive AI adoption in our business. We're happy to start with a casual interview. If you're even slightly interested, feel free to reach out via the link below or through X DMs. We look forward to hearing from you! Thank you for reading all the way to the end!
この記事は KINTOテクノロジーズアドベントカレンダー2024 の10日目の記事です🎅🎄

背景

KINTOかんたん申し込みアプリ の開発にあたっては、KMP (Kotlin Multiplatform) を利用して共有コードを実装し、Swift Packageとして公開しました。このアプローチではコードの重複を回避することで、プラットフォーム間でコードを効率的に共有できたり、開発プロセスをシンプルにすることができました。

当社のiOSチームは現在XcodeGenを使用して依存関係を管理しており、KMPコードのインポートは project.yml ファイルに修正を4行加えるだけで簡単に行えます。このような変更例は次のとおりです。

```diff
 packages:
+  Shared:
+    url: https://github.com/[your organization]/private-android-repository
+    minorVersion: 1.0.0
 targets:
   App:
     dependencies:
+      - package: Shared
       - package: ...
```

ところが、コードがプライベートリポジトリにあるためにいくつかの設定を追加する必要があります。このブログではその手順をまとめて、どのようにプロセスを効率化したかを説明します。

Package.swiftについて

KMPコードをSwiftパッケージとして公開する方法を簡単に説明します:

1. KMPコードを .xcframework にコンパイルする。
2. .xcframework をzipファイルにパッケージ化し、チェックサムを計算する。
3. GitHubに新しいリリースページを作成し、リリースアセットの一部としてzipファイルをアップロードする。
4. リリースページからzipファイルのURLを取得する。
5. URLとチェックサムを基に Package.swift ファイルを生成する。
6. Package.swift ファイルをコミットし、リリースをマークするgitタグを追加する。
7. そのgitタグをリリースページに関連付け、GitHubリリースを公式に公開する。

結果として生成される Package.swift ファイルは次のようになります。

```swift
// swift-tools-version: 5.10
import PackageDescription

let packageName = "Shared"

let package = Package(
    name: packageName,
    ...
```
```swift
    targets: [
        .binaryTarget(
            name: packageName,
            url: "https://api.github.com/repos/[your organization]/private-android-repository/releases/assets/<asset_id>.zip",
            checksum: "<checksum>"
        )
    ]
)
```

開発環境の権限設定

URLはプライベートリポジトリに存在するため、権限設定を行わないと次のエラーが発生します。これを解決するために、2つのオプションを検討します。1つ目は .netrc ファイル、2つ目はKeychainを使います。

オプション1: .netrc ファイルを使用する場合

GitHubの認証情報を .netrc ファイルに保存すると、APIリクエストの認証を簡単に行うことができます。

```shell
# 例: echo "machine api.github.com login username password ghp_AbCdEf1234567890" >> ~/.netrc
echo "machine api.github.com login <Your Github Username> password <Your Personal Access Token>" >> ~/.netrc
```

素早くできて効果的な方法ではあるものの、トークンがプレーンテキストで保存されるため、セキュリティリスクの恐れがあります。

オプション2: Keychainを使用する

トークンをプレーンテキストで保存したくない場合は、Keychainを使用して資格情報を安全に保存することができます。

1. Keychain Access.app を開く。
2. ① ログインKeychainを選択する。
3. ② を選択して、新しいPassword項目を作成する。
4. ダイアログボックスで、次の情報を入力する。
   - Keychain項目名: https://api.github.com
   - アカウント名: GitHubユーザー名
   - パスワード: パーソナルアクセストークン

このアプローチはより安全で、macOSの認証メカニズムと円滑に統合できます。

SSHユーザーの場合

上記の手順では、 https プロトコルを使用してiOSリポジトリをクローンしたことを想定しています。この場合、 github.com に対して必要な権限はすでに設定済みです。しかし、 ssh プロトコルを使用してリポジトリをクローンした場合、 github.com に対する権限が不足し、 resolveDependencies フェーズで権限に関連するエラーが発生する恐れがあります。これを解決するには、 .netrc ファイルにドメイン github.com のエントリを追加します。

```shell
# 例: echo "machine github.com login username password ghp_AbCdEf1234567890" >> ~/.netrc
echo "machine github.com login <Your Github Username> password <Your Personal Access Token>" >> ~/.netrc
```

または、 Keychain Access を使用して、 https://github.com という名前の項目を追加します。どちらの方法でも、システムに必要な権限があることをしっかりと設定できます。

GitHub Actions

ローカル開発環境の課題を解決した後は、CI環境の権限の課題にも対応してビルド中の自動化をスムーズにする必要があります。

GitHub Actionsでトークンを取得する

パーソナルトークンを使用する

簡単なアプローチの1つは、プライベートリポジトリにアクセス可能なパーソナルアクセストークン(PAT)を作成し、Actionsシークレットを介してCI環境に渡すことです。効果的な方法ではありますが、欠点がいくつかあります。

- トークンの有効期限: 有効期限のあるトークンは定期的な更新が必要で、更新を忘れるとCIが失敗する恐れがあります。有効期限のないトークンは、長期的なセキュリティリスクを引き起こします。
- 広範囲にわたる権限: 通常、個人アカウントは複数のプライベートリポジトリにアクセスできるため、PATの権限を単一のリポジトリに制限することが困難となってしまいます。
- 属人化: アカウント所有者がロール異動によってプライベートリポジトリへのアクセスを失うと、CIワークフローが失敗してしまいます。

GitHubアプリを使用する

より堅牢なソリューションにはGitHub
Appの使用があり、次のようないくつかのメリットがあります。

- リポジトリに対するきめ細かい権限
- 個々のアカウントに依存しない
- セキュリティを強化する一時的なトークンが使用可能

GitHubアプリの設定

最終的にはGitHub Appを使ってアクセス許可を設定しました。手順は次のとおりです。

1. 組織内にGitHubアプリを作成する。
2. iOSとAndroid両方のプロジェクトにアプリをインストールし、リポジトリへのアクセスを管理する。
3. iOSプロジェクトのActionsシークレットでアプリの AppID と Private Key を設定する。
4. ワークフローにコードを追加して一時的なアクセストークンを取得する。例を紹介します。

```yaml
steps:
  - name: create app token
    uses: actions/create-github-app-token@v1
    id: app-token
    with:
      app-id: ${{ secrets.APP_ID }}
      private-key: ${{ secrets.APP_PRIVATE_KEY }}
      owner: "YourOrgName"

  - name: set access token for private repository
    shell: bash
    env:
      ACCESS_TOKEN: ${{ steps.app-token.outputs.token }}
    run: |
      git config --global url."https://x-access-token:$ACCESS_TOKEN@github.com/".insteadOf "https://github.com/"
      touch ~/.netrc
      echo "machine github.com login x-access-token password $ACCESS_TOKEN" >> ~/.netrc
      echo "machine api.github.com login x-access-token password $ACCESS_TOKEN" >> ~/.netrc
```

GitHub Appを使用することで、CIワークフローの安全性と効率性を確保し、個々のユーザーアカウントへの依存を解消できます。このアプローチでリスクが最小限に抑えられ、チーム間の開発がスムーズになります。
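記事前半の「URLとチェックサムを基に Package.swift ファイルを生成する」という手順は、例えば次のような小さなスクリプトで自動化できます。テンプレートは本記事のPackage.swiftの形に合わせていますが、関数名や引数は説明用の仮のもので、実際のリリーススクリプトそのものではありません。

```python
# リリースアセットのURLとチェックサムからPackage.swiftを生成するスケッチ。
# render_package_swift は説明用の仮の関数です。

PACKAGE_TEMPLATE = """\
// swift-tools-version: 5.10
import PackageDescription

let packageName = "{name}"

let package = Package(
    name: packageName,
    products: [.library(name: packageName, targets: [packageName])],
    targets: [
        .binaryTarget(
            name: packageName,
            url: "{url}",
            checksum: "{checksum}"
        )
    ]
)
"""

def render_package_swift(name: str, url: str, checksum: str) -> str:
    """テンプレートに値を埋め込んでPackage.swiftの内容を返す。"""
    if len(checksum) != 64:  # SwiftPMのチェックサムはSHA-256(16進64文字)
        raise ValueError("checksum must be a 64-character SHA-256 hex digest")
    return PACKAGE_TEMPLATE.format(name=name, url=url, checksum=checksum)

content = render_package_swift(
    "Shared",
    "https://api.github.com/repos/example-org/private-android-repository/releases/assets/123.zip",
    "a" * 64,  # 実際には `swift package compute-checksum <zipファイル>` の出力を使う
)
print(content)
```

生成した内容をファイルに書き出してコミットし、gitタグを付けてリリースに関連付ければ、手順5〜7をCI上でまとめて実行できます。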
This article is the entry for day 10 in the KINTO Technologies Advent Calendar 2024 🎅🎄 Mobility Night is a series of study sessions where software engineers, business professionals, researchers, and product managers in the mobility industry can casually gather to share industry-specific insights and challenges. After hosting our initial closed session (#0), we were excited to open the doors for everyone to join in our first public session (#1). This event focused on 'GPS and Location Information,' fundamental technologies for mobility services. From car navigation and map applications to on-demand transportation, autonomous driving, and smart city infrastructure, precise location tracking is essential for enabling these innovations. During the event, five sessions were held. Each session explored challenges and possibilities from unique perspectives, focusing on GPS and location information technology. This article is written by Nakanishi from the Development Relations Group, who is also involved in planning and organizing the Mobility Night. 1. Exploring New Google Places API Speaker: Numa, KINTO Technologies Google Places API is one of the core features of the mapping platform and is an important interface for efficiently performing nearby searches and retrieving place information. In this session, the latest improvements were introduced, such as the enhancements to the Autocomplete feature, which provides instant suggestions while typing, and the Fields parameter to filter necessary information. Key points: Performance and cost optimization: By specifying Fields, unnecessary data retrieval can be reduced, leading to lower API costs and faster response times. User experience improvement: Providing quick access to the information users want is a significant advantage for people on the move. The enhanced Autocomplete feature reduces search load, refining the overall UX.
Future outlook: Currently, the focus is on location information retrieval, but in the future, personalized strategies integrating IoT sensors and behavioral analysis are also expected. https://speakerdeck.com/kintotechdev/exploring-new-google-places-api 2. Location Data Products Created with Driving Data from AI Dashcam Services Speaker: Shimpei Matsuura, GO Inc. Dashcams are commonly perceived as devices for recording accidents, but in this session, they were reinterpreted as "driving data = a platform for turning streets into sensors." AI analysis of video and GPS data highlighted the potential for dynamically updating maps with real-time information on road signs, traffic signals, and construction conditions. Key points: Dynamic map updates: Evolving static maps into a "living information infrastructure" by reflecting changes in road infrastructure almost in real time. Multiple vehicle data integration: By cross-referencing data from different vehicles, temporary signs and construction sites can be detected with high accuracy. Privacy measures: Ensuring that personal information captured in video data is properly anonymized while retaining essential road-related information is crucial for both technology and operations. Future applications: Potential for various business developments, including HD maps for autonomous driving, smart city planning, and the creation of new services. https://speakerdeck.com/pemugi/aidorarekosabisunozou-xing-deta-dezuo-ruwei-zhi-qing-bao-detapurodakuto-wei-zhi-qing-bao-jing-du-xiang-shang-nogong-fu 3. An Overview of Satellite Positioning Technology: Lessons from Using GPS Modules Speaker: Shinya Hiruta, VP of Engineering, Charichari, Inc. Although GPS is widely taken for granted, in urban environments there are many practical issues such as radio wave reflection, poor visibility, and an uneven number of satellites.
In this session, we explored the fundamentals of satellite positioning technology, as well as potential accuracy improvements and countermeasures. Key points: Environment-dependent issues: Location-specific conditions, such as multipath interference in urban areas and satellite signal loss in tunnels, can significantly impact accuracy. Multi-GNSS utilization: Instead of relying solely on GPS, combining multiple systems such as GLONASS, Galileo, BeiDou, and Michibiki (QZSS) enhances overall accuracy. Hybrid methods: Accuracy is improved with complementary technologies such as accelerometers, gyroscopes, Wi-Fi/Bluetooth beacons, and map matching. Basic knowledge as a guiding principle: This understanding will serve as a guiding principle for future product design, quality assurance, and data analysis. 4. Experimenting with the Post-Processing Technique for Location Information Correction (Tentative) Speaker: Kensuke Takahara, IoT Team, Luup, Inc. When real-time positioning accuracy is challenging, there is an alternative called "Post-Processing Kinematic (PPK)," which improves accuracy later. PPK is a method of post-processing that combines acquired data with reference station data, without using expensive RTK equipment or special communication infrastructure. Key points: Benefits of PPK: Accuracy can be improved at a later date, without real-time constraints. Ultimately, centimeter-level accuracy is achieved while reducing the initial investment. Cost efficiency and scalability: Flexibility to improve accuracy later as future demand grows. Useful for delivery robots, drones, and shared mobility services. Application range: PPK proves highly valuable in areas focused on post-analysis, such as map maintenance, advanced driving logs, and infrastructure inspection. https://speakerdeck.com/kensuketakahara/hou-chu-li-dewei-zhi-qing-bao-wobu-zheng-suruji-shu-woshi-sitemita 5.
Construction of Simulation Logic Before Introduction of On-Demand Bus Service (Tentative) Speaker: Halufy, New Technology Development Office, Advanced Planning Department, TOYOTA Connected Corporation While on-demand transportation is attractive for its flexibility, it is not easy to ensure profitability and sustainability. This session introduced pre-implementation simulations to enable precise demand forecasting and operational planning. Key points: Building a sustainable model: Data is used to verify optimal station placement, fleet size, and time-of-day settings without relying on subsidies. Strategic data utilization: By integrating location data with OD data and reservation requests, simulations are conducted to test demand forecasting, pricing strategies, and route optimization. Long-term vision: It will serve as a foundation for improving overall urban transportation efficiency and convenience by integrating with other mobility means and infrastructure. Future Topics and the Prospects for Mobility Night By focusing on GPS and location information, Mobility Night #1 has taken a deep dive into the mobility industry's "current location awareness" technology. Many participants commented, "I never realized how deep the topic of location data could be!" and "It is valuable to hear about everything from the basics to advanced utilization." However, the mobility industry encompasses far more than just GPS and location information. In the future, we also aim to explore areas such as: IoT device utilization: Real-time data collection and control from sensors. Data analysis: Demand forecasting and advanced operational optimization. Product design: Improving UX and maximizing user satisfaction. Quality assurance: Ensuring reliability and compliance with safety standards. Our aim is to make Mobility Night a place that encourages innovation throughout the industry. Mobility Night is not planned solely by the organizers; we also welcome speaker proposals and topic suggestions from participants.
We aim to create a community where discussions and co-hosting opportunities can be easily facilitated through Discord, making it accessible and engaging for everyone. https://discord.com/invite/nn7QW5pn8B Summary Mobility Night #1 clarified the core technological challenges of mobility services by focusing on GPS and location-based technologies, highlighting the potential for new value creation through overcoming these challenges. A diverse range of approaches intermingled, including efforts to transform static maps into a dynamic information infrastructure, enhance environment-dependent GPS accuracy through advanced methods, and develop data-driven strategies for on-demand transportation planning. These insights will be combined with future Mobility Night themes such as IoT, data analysis, product design, and quality assurance, further accelerating progress across the industry. Stay tuned to Mobility Night, and let's learn, interact, and create new value together!
I'm Nakanishi from the QA Group. (I also wear several hats, including technical PR and KINTO FACTORY development ^^) This year, KINTO Technologies has adopted the slogans "AI-first" and "release-first," and the QA Group is likewise working on using AI to increase release speed. Recently, QA members who share that mindset got together for a lively brainstorming session about "things it would be great to be able to do" and "things we'd like to try!" In this article, I'd like to share the appealing ideas that came out of it and explore with you the possibilities that AI and QA hold together.

Challenges identified in the discussion
QA work generates a large volume of documents every day, and there is so much information that finding what you need has become difficult. Specification documents also differ in format by project and by author, which makes them hard to share. Furthermore, because we lack an established mechanism for incident retrospectives and prevention, preventing recurrence is a struggle. Review work also places a very heavy burden on people and needs to be made more efficient.

A future AI can deliver

Efficient use of information (RAG)
RAG (Retrieval-Augmented Generation) is an information retrieval and analysis technique built on the latest generative AI. It quickly extracts highly relevant information from the vast accumulation of past documents and incident data and presents the right information to the user. For example, when an incident occurs, the AI instantly searches and analyzes similar past cases and immediately proposes useful solutions, like a brilliant secretary who remembers every past experience and gives advice the moment you need it. RAG is already deployed in industries such as finance and customer support, dramatically improving the speed of customer response and problem solving.

Organizing and supporting specifications and design
AI finds contradictions and omissions in specification documents and presents the issues in a clearly organized form. Furthermore, given a concrete specification, AI analyzes its content and automatically generates appropriate test scenarios. For example, given the specification for an e-commerce site's cart feature, the AI instantly produces scenarios such as "add item → change quantity → pay → confirm order," and it can also generate error-handling and boundary-value test scenarios. This drastically reduces the time and effort of writing scenarios by hand and dramatically improves the precision and efficiency of QA work.

Automating and streamlining the QA process
AI analyzes user operation logs and catches subtle mistakes humans are likely to miss before they become problems. Concretely, AI extracts the errors and abnormal operations users frequently trigger and identifies things like "errors that occur on a specific screen transition" or "form fields that are frequently mistyped." This enables improving test scenarios and proactively addressing latent problem areas. AI can also automate the tedious sequential numbering of test cases in Confluence, eliminating manual management mistakes and greatly reducing work time.

Incident analysis and prevention
AI analyzes past incidents and proposes concrete measures to prevent recurrence. For example, it thoroughly analyzes past e-commerce incidents such as "layout breakage in a specific browser" or "payment feature outages," then proposes concrete actions such as "periodically verify behavior for each version of the affected browser" or "strengthen error handling around payment processing." When a high-urgency incident occurs, the AI immediately assesses the risk level and automatically notifies stakeholders so a rapid response can be taken, providing real-time problem-solving support.

Streamlining test data creation
AI generates the data needed for testing instantly. Concretely, it can quickly produce large volumes of the diverse data that realistic business scenarios require, such as new-car and used-car data and user information. By connecting AI with browser-automation tools such as Selenium and Appium, browser operations that used to be manual can be automated, creating large amounts of test data in a short time with simple configuration. This prevents human error while dramatically reducing the effort of test data creation, improving the efficiency of the entire QA process.

Tool integration and process automation
Integrating tools such as JIRA and Asana with Slack delivers the information you need automatically and in a timely manner. Information from many tools is centralized and managed efficiently; AI acts as the bridge between them and keeps the process flowing smoothly.

Using a QA-specific AI model
We are considering using a ChatGPT tuned specifically for QA work. Building our own AI model would dramatically improve response speed. We will also promote an environment where developers themselves can use AI to self-check easily.

Action plan going forward
Start with AI-driven organization and consolidation of information as the top priority
Use AI assistance to streamline review work and reduce the burden on people
Continuously collect data toward developing a dedicated AI model
Actively apply AI-based incident analysis to process improvement
Automate integration between tools for further efficiency

Things one person cannot achieve alone become possible when we think about them together. Building on the ideas from this brainstorming, we will pursue a variety of AI-powered QA activities. If you are a QA engineer who wants to explore the new possibilities of QA together with AI, or to take on new initiatives with us, we also offer casual interviews, so please feel free to get in touch.
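As a rough illustration of the retrieval half of RAG described above, the sketch below scores past incident reports against a new incident description by word overlap and returns the most similar one. This is a toy stand-in (a real system would use embeddings and a vector store), and all incident texts here are invented examples:

```python
def tokenize(text: str) -> set[str]:
    """Lowercase the text and split it into a set of words."""
    return set(text.lower().split())

def retrieve(query: str, documents: list[str], top_k: int = 2) -> list[str]:
    """Rank documents by Jaccard similarity to the query; return the top_k."""
    q = tokenize(query)
    def score(doc: str) -> float:
        d = tokenize(doc)
        return len(q & d) / len(q | d) if q | d else 0.0
    return sorted(documents, key=score, reverse=True)[:top_k]

# Invented examples of past incident summaries.
past_incidents = [
    "layout breakage on a specific browser after CSS update",
    "payment feature outage caused by gateway timeout",
    "frequent mistyped input on the address form field",
]

new_incident = "payment timeout during checkout"
context = retrieve(new_incident, past_incidents, top_k=1)
print(context[0])  # the most similar past incident
```

In a full RAG pipeline, the retrieved text would then be passed to a generative model as context for proposing a fix.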
This article is the entry for day 9 in the 2024 KINTO Technologies Advent Calendar 🎅🎄 KINTO Technologies is an engineer-driven organization with a team of over 300 members. In an increasingly complex business environment, we continuously seek to balance organizational efficiency and creativity. With dispersed locations and an increase in remote work, cross-departmental communication has become more limited, making it essential to address this challenge seriously. This article, written by Nakanishi from the Development Relations Group, discusses the first step in revitalizing the organization by using Slack, our primary tool for daily communication, and how we're building a talent search system based on Slack. Talent search is a system that allows you to search for data on employees' skills. Why Is Talent Search Needed? While working efficiently is important, innovation often comes from unexpected encounters, and even seemingly mundane conversations can play a crucial role. In our organization, with the intense focus on daily tasks, we often tend to overlook “tacit knowledge” and “latent potential”. For example, even if you think, "Is there anyone with this skill?", you don't know whom to consult. As our organization grows, this information asymmetry has become a serious challenge. To tackle this challenge directly, we decided to strategically utilize Slack profiles and take a proactive approach. Benefits of Utilizing Slack Profiles From the Perspective of the Developer Relations (DevRel) Group Discovering previously unnoticed talent There are many individuals whose talents and potential remain unnoticed within the organization. Although members of the Technical PR Group communicate with employees daily, it is still challenging to fully understand everyone. Slack profiles serve as a new tool to make these "hidden talents" visible.
Accelerating project support We hope that being able to quickly identify people with the right skills will dramatically improve the speed of project launch and problem solving. Until now, finding someone with a specific skill often relied on word-of-mouth searches within the company. However, establishing a network that enables direct communication without relying on hub organizations like us is essential for the organization's future growth. Promoting cross-departmental collaboration Until now, the Technical PR Group has organized various initiatives, such as in-house study sessions, exchange events, and study sessions inviting external lecturers. As a result, natural communication has emerged within the company, creating a chain reaction where initial connections lead to an expanding network and new collaborations are established every day. This Slack initiative will surely spur this trend on. Benefits for All Employees Expanding opportunities for career growth Clearly expressing your skills and interests opens up new possibilities that you may not have noticed before. By connecting through keywords that you may not have thought of yourself, various opportunities emerge at a grassroots level. This goes beyond just work, creating opportunities for individuals with shared challenges to learn from each other and grow together. Quick access to skilled colleagues Specifically, when a new employee is looking for a frontend engineer with expertise in Next.js, they can quickly and easily search in Slack, which makes learning and problem-solving much more efficient. An internal talent database will be built and made searchable as an extension of searching messages on Slack. Revitalizing natural internal interactions There are also grassroots activities for various hobbies within the company. Whether or not they can all be called hobbies, there are interest-based channels ranging from back health to sharing easy-to-make recipes.
There are also channels for various sports, games, and, of course, new technologies. As individuals become more easily connected to these channels, non-work-related interactions grow, fostering smoother collaboration in urgent work situations through pre-existing relationships. A System to Support Profile Creation For those who feel unsure about what to write, the Technical PR Group is actively providing support. This approach is more than just gathering information; it is a thoughtful dialogue process designed to unlock each employee's potential. Since the launch of our Tech Blog, we have conducted interviews to highlight employees' talents, provided support in writing articles and preparing presentation materials, and organized events and study sessions. If you are reading this article and have not yet completed your Slack profile, or if you are unsure what to include, please reach out. Let's work together to discover your strengths and share them within the organization! Support includes: Uncovering experience and interests through individual interviews One-on-one conversations help discover potential strengths that even the individual may not be aware of. Templates for creating profiles Even if you are not good at expressing yourself, you can fill one out with confidence. Assistance in verbalization for those who find self-expression difficult Specialized staff offer close support to help individuals appropriately express their strengths and interests. Template: Search results Future Prospects Currently, talent search is conducted manually, but in the future, we aim to develop a skill matching system using AI. By effectively utilizing accumulated data, we envision a more efficient and strategic approach to human resource management. Looking ahead, this will enable a deeper understanding of each employee's potential and link them with the best opportunities. Conclusion The Slack profile is more than just a self-introduction.
It is a strategic tool that unlocks an organization's hidden potential by connecting people and is the key to unleashing individual possibilities. By actively sharing your interests, skills, and potential, the possibilities of the entire organization can be expanded. We believe that this small step will eventually lead to a significant transformation.
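The kind of lookup described in this article, such as finding a frontend engineer with Next.js expertise, can be sketched as a simple filter over profile records. This is an illustrative toy, not our actual system: a real implementation would pull profile fields via the Slack Web API (e.g. users.list), and the names and skills below are invented:

```python
# Invented sample profiles; in practice these would come from Slack profile fields.
profiles = [
    {"name": "Sato",   "skills": ["Next.js", "TypeScript", "React"]},
    {"name": "Suzuki", "skills": ["Kotlin", "Android"]},
    {"name": "Tanaka", "skills": ["Go", "Terraform", "Next.js"]},
]

def find_by_skill(profiles: list[dict], skill: str) -> list[str]:
    """Return the names of everyone whose profile lists the given skill
    (case-insensitive match)."""
    wanted = skill.lower()
    return [p["name"] for p in profiles
            if any(s.lower() == wanted for s in p["skills"])]

print(find_by_skill(profiles, "next.js"))  # → ['Sato', 'Tanaka']
```

The AI-based matching mentioned under Future Prospects would replace this exact-match filter with semantic matching over the same profile data.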
This article is the entry for day 8 in the KINTO Technologies Advent Calendar 2024 🎅🎄

Introduction
Hello. My name is Nakamoto, and I work in the Mobile App Development Group. I usually work from Osaka and collaborate with team members in Tokyo on iOS development for the KINTO Unlimited app. This article takes a deep dive into the process of enhancing the architecture of the KINTO Unlimited iOS app. The app's architecture evolved gradually, moving through three different phases, and ultimately transitioned to its current structure. Below, I will address the design and the challenges encountered at each stage.

The 1st Generation Architecture
Adopted the VIPER architecture
Implemented all screens using UIKit + xib/storyboard
Used Combine to update views
Chose an architecture with a proven track record within the company due to the short timeline for the first release

Design of the 1st Gen

flowchart TD
  id1(ViewController) -- publish --> id2(ViewModel) -- subscribe --> id1
  id2 -- request --> id3(Interactor)
  id1 -- apply view property --> id4(UIView)
  id1 -- transition --> id5(Router)

ViewController
Notifies the ViewModel of events.
Subscribes to outputs triggered by events from the ViewModel.
Updates the View based on the subscription results and invokes the Router to handle screen transitions.
ViewModel
Uses Combine to update the state reactively.
Transforms an event Publisher into a Publisher that outputs the View state.
Interactor
Performs requests to the API and internal DB.
Router
Performs transitions to other screens.
UIView
Laid out using code/xib/storyboard.

Issue with the 1st Gen
Layouts built with UIKit come with high development costs and are challenging to modify, particularly when xib/storyboard is used. Transitioning to SwiftUI would significantly enhance the process!

The 2nd Generation Architecture
Transitioning from UIKit to SwiftUI
Replaced UIKit layouts with SwiftUI to enhance development efficiency.
Integrated a SwiftUI View into a ViewController using UIHostingController.
Performed screen transitions using UIKit as usual. At the time, SwiftUI's screen transition API was unstable, so we decided to continue using UIKit.
Focused on switching to SwiftUI: making too many changes at once could raise concerns about potential degradation of functional specifications.

Design of the 2nd Gen

flowchart TD
  id1(ViewController) -- input --> id2(ViewModel) -- output --> id1
  id2 -- request --> id6(Interactor)
  id1 -- mapping --> id3(ScreenModel) -- publish --> id1
  id3 -- publish --> id4(View) -- publish --> id3
  id1 -- transit --> id5(Router)

ViewController
Implements the HostingControllerInjectable protocol and adds a SwiftUI View.
Subscribes to the ViewModel's output and updates the ScreenModel (an ObservableObject) accordingly.
Subscribes to the ViewModel output and the ScreenModel Publisher, then uses the Router to handle screen transitions.
ScreenModel
An ObservableObject that manages the state of the View.
ViewModel / Interactor / Router
Same roles as in the 1st Generation.

Issues with the 2nd Gen
State management is split between the ViewModel and ScreenModel, leading to fragmented logic and increased development and maintenance costs.
Issues carried over from the 1st Generation: using Combine for reactive state changes raises concerns about maintainability and, due to the extensive codebase, can reduce readability. A single ViewModel per screen can become excessively large on multi-functional screens.
Therefore, transitioning away from Combine and the ViewModel would be a highly beneficial improvement!

The 3rd Generation Architecture
Switched from a Combine-driven ViewModel to a ViewStore-based architecture that centralizes state management.
Implemented a structure that directly updates the ObservableObject with event results, eliminating the need for AnyPublisher.
Utilized async/await to achieve reactive state changes without relying on Combine.
State management logic can be modularized by dividing it into functions.
Design of the 3rd Gen

flowchart TD
  subgraph ViewStore
    id1(ActionHandler) -- update --> id2(State)
  end
  id2 -- bind --> id5(View) -- publish action --> id1
  id1 -- publish routing --> id3(ViewController) -- publish action --> id1
  id3 -- transit --> id4(Router)
  id1 -- request --> id6(Interactor)

ViewStore
State: An ObservableObject that manages the state of the View and is used within a SwiftUI View.
Action: An enum that replicates the functionality of the INPUT in the transform method of a traditional ViewModel.
ActionHandler: A handler that accepts an Action as an argument and updates the State accordingly. Implemented using async/await.
ViewController
Subscribes to routerSubject and uses the Router to handle screen transitions.
Interactor / Router
Same as in the 2nd Generation.

Splitting the ActionHandler
On multi-functional screens, separating the ActionHandler and State can significantly improve code readability and maintainability. Binding the actionPublisher of one State to another State allows actions to be propagated from one View to another.

flowchart TD
  subgraph ViewStore
    id2 -- action --> id1
    id1(ActionHandler1) -- update --> id2(State1)
    id5 -- action --> id4
    id4(ActionHandler2) -- update --> id5(State2)
    id8 -- action --> id7
    id7(ActionHandler3) -- update --> id8(State3)
  end
  subgraph Parent View
    id3
    id6
    id9
  end
  id2 -- bind --> id3(View1)
  id5 -- bind --> id6(View2)
  id8 -- bind --> id9(View3)

Conclusion
We have been pursuing this initiative for over a year, alongside ongoing feature development. Now, nearly all of the source code has been transitioned to the 3rd Generation Architecture. As a result, the code has become more readable and maintainable, paving the way for smoother future development. We are excited to continue making improvements!
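To make the State/Action/ActionHandler relationship concrete, here is a minimal, language-agnostic sketch of the same unidirectional pattern (shown in Python for brevity; the actual app implements this in Swift with ObservableObject and async/await, and all names and data below are illustrative, not from the real codebase):

```python
from dataclasses import dataclass, field
from enum import Enum, auto

class Action(Enum):
    """Events the View can publish, mirroring the INPUT of a ViewModel transform."""
    LOAD = auto()
    REFRESH = auto()

@dataclass
class State:
    """The observable state the View binds to."""
    is_loading: bool = False
    items: list[str] = field(default_factory=list)

class ViewStore:
    """Owns the State; the action handler updates it directly in response to Actions."""
    def __init__(self) -> None:
        self.state = State()

    def handle(self, action: Action) -> None:
        # In Swift this would be an async function awaiting the Interactor.
        if action in (Action.LOAD, Action.REFRESH):
            self.state.is_loading = True
            self.state.items = self._fetch()  # stand-in for an Interactor request
            self.state.is_loading = False

    def _fetch(self) -> list[str]:
        return ["item-1", "item-2"]  # stub data

store = ViewStore()
store.handle(Action.LOAD)
print(store.state.items)  # → ['item-1', 'item-2']
```

The point of the pattern is that the View only ever publishes Actions and binds to State; all mutation happens in one place, which is what makes splitting handlers per feature straightforward.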
The first-ever Appium Meetup Tokyo was held on February 20, 2025, at Autify's Tokyo office. About ten people attended in person, and many more joined online.

Opening and icebreaker
The event kicked off with a light, humor-filled talk by tsueeemura of Autify. Doubling as an online connection check, the question "What are you having for dinner tonight?" drew the answer "Hot pot today," which warmed up the room.

Autify: "Use cases for Appium plugins"
The first speaker was rerorero, who works on mobile product development at Autify.
Autify NoCode Mobile, offered by Autify, is a cloud service that makes it easy to automate mobile app testing without writing code. Even without programming expertise, you can record tests through an intuitive interface and re-run them automatically, so developers and non-engineers alike can set up a test environment quickly. Another distinctive feature is that real devices and simulators are available in the cloud, eliminating the need to procure hardware in-house and greatly reducing capital expenditure.
However, on screens with a large number of UI elements, performance degraded severely: a tap operation that should normally complete in a few seconds could take up to 40 seconds.
To solve this, rerorero introduced IDB (iOS Development Bridge), an open-source CLI tool developed by Facebook for fast control of iOS simulators and real devices. By sending events directly to the Core Simulator Service, IDB dramatically improves responsiveness. He integrated it as an Appium plugin so it can be used directly, without complex network configuration between servers. As a result, the 40-second operation was reduced to 40 milliseconds, roughly a 1,000x performance improvement.
Key points from the talk:
How to install an Appium plugin, with an implementation example in JavaScript
The technical mechanism behind IDB's fast tap operations (sending events to the Core Simulator Service)
A live demonstration of the performance improvement
He showed that plugins are implemented in the following style.

KINTO Technologies: "Guidelines and practices for efficient app automation"
Next, Oka and Pannu of KINTO Technologies presented how they built their test automation environment and its results.
At our company, the burden of manual testing across many combinations of devices and OS versions was growing. So, from the early stages of development, the QA and development teams worked together to introduce unified, test-specific IDs. This reduced the burden of fixing XPaths whenever the layout changed and greatly improved test stability.
Test results are posted to Slack in real time, and detailed logs and videos are managed in Box. This gives everyone involved an easy way to check the state of testing.
Key points from the talk:
Integrating test-automation awareness into the development process
Comparing maintenance load before and after introducing test-specific IDs
Efficient management of test results using Slack and Box
More efficient coding with GitHub Copilot
Slides: https://speakerdeck.com/kintotechdev/xiao-lu-de-naapurizi-dong-hua-notamenogaidoraintoshi-jian-fang-fa

📌 The state of E2E testing and topics of interest, from the attendee survey
We surveyed the meetup attendees; here are some of the interesting trends that emerged.
① Attendee roles: More than half (54.1%) were QA engineers, but a variety of other roles also attended, including SET/SDET and web and mobile application engineers.
② Appium experience: More than half (55.7%) answered that they had never used Appium, showing that many attendees are new users or considering adoption, while some had a year or more of experience. The range of maturity was wide.
③ E2E testing experience: Relatively experienced groups, "1-3 years (27.9%)" and "5+ years (24.6%)," together exceeded half, confirming that E2E testing is widely practiced.
④ Topics of greatest interest:
Attendees were particularly interested in topics such as:
Results and case studies of introducing Appium-based testing
Usage scenarios, pitfalls, and real-world struggles with Appium
Integration into CI/CD and support for cross-platform frameworks (React Native, Flutter, etc.)
Based on these survey results, we will keep delivering information that matches your interests and needs.

Networking and what's next
In the networking session afterward, participants mingled actively over pizza, sparking new ideas and collaborations. Appium Meetup Tokyo will be held regularly, and we are looking for speakers and organizers. Please join us next time.
For those considering attending:
Those who want to start seriously adopting automated testing for mobile apps
Those interested in Appium who want concrete examples and know-how
Engineers and QA staff interested in operating it in combination with CI/CD
Those who want to improve their company's testing culture by learning from others
If any of these apply to you, come share the latest insights at Appium Meetup Tokyo. Announcements and details will be posted via @AutifyJapan and @KintoTech_Dev. If you have questions or requests, feel free to reach out.
We look forward to seeing you at Appium Meetup Tokyo #2.
Archived stream: https://www.youtube.com/watch?v=zV4WbClGquE
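The real-time Slack notification flow mentioned in the KINTO Technologies talk can be sketched roughly as follows. This is a minimal illustration, not their actual implementation: it builds a Slack incoming-webhook payload from test results using only the Python stdlib; the webhook URL and the result records are placeholders, and the actual HTTP POST is left in a helper that is not executed here.

```python
import json
import urllib.request

def build_payload(results: list[dict]) -> dict:
    """Summarize test results as a Slack message payload."""
    passed = sum(1 for r in results if r["status"] == "pass")
    failed = len(results) - passed
    lines = [f"E2E run finished: {passed} passed, {failed} failed"]
    lines += [f"• {r['name']}: {r['status']}" for r in results if r["status"] != "pass"]
    return {"text": "\n".join(lines)}

def notify(webhook_url: str, payload: dict) -> None:
    """POST the payload to a Slack incoming webhook (not executed in this sketch)."""
    req = urllib.request.Request(
        webhook_url,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    urllib.request.urlopen(req)

# Placeholder results; a real run would collect these from the Appium test suite.
results = [
    {"name": "login_flow", "status": "pass"},
    {"name": "checkout_flow", "status": "fail"},
]
payload = build_payload(results)
print(payload["text"])
```

Posting only a summary plus the failing cases keeps the channel readable, with full logs and videos linked from storage such as Box, as described in the talk.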