Sylvain Filoni
fffiloni
AI & ML interests
ML for Animation • Alumni Arts Déco Paris
Posts
Post
"The principle of explainability of ai and its application in organizations"
Louis Vuarin, Véronique Steyer
—► https://doi.org/10.3917/res.240.0179
ABSTRACT: The explainability of Artificial Intelligence (AI) is cited in the literature as a pillar of AI ethics, yet few studies explore its organizational reality. This study proposes to remedy this shortcoming, based on interviews with actors in charge of designing and implementing AI in 17 organizations. Our results highlight: the massive substitution of explainability by the emphasis on performance indicators; the substitution of the requirement of understanding by a requirement of accountability; and the ambiguous place of industry experts within design processes, where they are employed to validate the apparent coherence of “black-box” algorithms rather than to open and understand them. In organizational practice, explainability thus appears sufficiently undefined to reconcile contradictory injunctions. Comparing prescriptions in the literature and practices in the field, we discuss the risk of crystallizing these organizational issues via the standardization of management tools used as part of (or instead of) AI explainability.
Vuarin, Louis, et Véronique Steyer. « Le principe d'explicabilité de l'IA et son application dans les organisations », Réseaux, vol. 240, no. 4, 2023, pp. 179-210.
#ArtificialIntelligence #AIEthics #Explainability #Accountability
Post
I'm happy to announce that ✨ Image to Music v2 ✨ is ready for you to try, and I hope you'll like it too!
This new version has been crafted with transparency in mind,
so you can understand the process of translating an image to a musical equivalent.
How does it work under the hood?
First, we get a very literal caption from microsoft/kosmos-2-patch14-224; this caption is then given to an LLM agent (currently HuggingFaceH4/zephyr-7b-beta), whose task is to translate the image caption into a musical, inspirational prompt for the next step.
Once we've got a nice musical text from the LLM, we can send it to the text-to-music model of your choice:
MAGNet, MusicGen, AudioLDM-2, Riffusion or Mustango
Unlike the previous version of Image to Music, which relied on the Mubert API and could output curious and obscure combinations, this version only uses open-source models available on the Hub, called via the Gradio API.
The musical result should also match the atmosphere of the input image more closely, thanks to the LLM agent step.
Pro tip: you can adjust the inspirational prompt to match your expectations, depending on the chosen model and the specific behavior of each one.
Try it, explore the different models, and tell me which one is your favorite!
—► fffiloni/image-to-music-v2
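For anyone curious to reproduce the idea outside the Space, here is a minimal local sketch of the same three-step chain (caption, then music prompt, then audio). It is an approximation, not the Space's actual code: the Space chains other Spaces through the Gradio API, while this sketch runs the models directly with transformers, and the prompt wording, generation settings, and the MusicGen checkpoint chosen here are illustrative assumptions.

```python
# Minimal sketch of the caption -> music-prompt -> audio chain (not the Space's actual code).
# The real Space calls other Spaces via the Gradio API; here the models run locally with
# transformers. Prompt wording, generation settings, and the MusicGen checkpoint are assumptions.
import scipy.io.wavfile as wavfile
from PIL import Image
from transformers import AutoModelForVision2Seq, AutoProcessor, pipeline

image = Image.open("example.jpg")  # hypothetical input image

# Step 1: get a very literal caption of the image with Kosmos-2.
processor = AutoProcessor.from_pretrained("microsoft/kosmos-2-patch14-224")
captioner = AutoModelForVision2Seq.from_pretrained("microsoft/kosmos-2-patch14-224")
inputs = processor(text="<grounding>An image of", images=image, return_tensors="pt")
generated_ids = captioner.generate(**inputs, max_new_tokens=64)
raw_text = processor.batch_decode(generated_ids, skip_special_tokens=True)[0]
caption, _entities = processor.post_process_generation(raw_text)

# Step 2: ask the LLM agent to turn the literal caption into a musical, inspirational prompt.
llm = pipeline("text-generation", model="HuggingFaceH4/zephyr-7b-beta")
messages = [
    {"role": "system",
     "content": "You turn image descriptions into short, evocative prompts for a music generation model."},
    {"role": "user", "content": caption},
]
chat_prompt = llm.tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
output = llm(chat_prompt, max_new_tokens=80, do_sample=True)[0]["generated_text"]
music_prompt = output[len(chat_prompt):].strip()  # keep only the newly generated text

# Step 3: send the musical prompt to a text-to-music model (MusicGen in this sketch).
synthesiser = pipeline("text-to-audio", model="facebook/musicgen-small")
music = synthesiser(music_prompt, forward_params={"do_sample": True})
wavfile.write("image_to_music.wav", rate=music["sampling_rate"], data=music["audio"])
```

Swapping the last step's checkpoint for another open text-to-music model mirrors the model choice offered in the Space.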
Collections: 13

Spaces: 115 (selection)
- Image2SFX Comparison: Generates an audio environment from an image
- Video SoundFX: Generates a sound effect that matches a video shot
- Video to Music: Generate and apply a matching music background to a video shot
- Image to Music v2: Get a music sample inspired by the mood of an image
- LLM Agent from an Image: Get an LLM Assistant personality idea from an image
- ZeST: Zero-Shot Material Transfer from a Single Image