Affiliation · #90 opened about 8 hours ago by NatalieCheong
How should names & few-shot examples be encoded? · #89 opened about 20 hours ago by theobjectivedad
Please provide some guidance or documentation on tool usage. · #88 opened 2 days ago by lcahill
AttributeError: type object 'AttentionMaskConverter' has no attribute '_ignore_causal_mask_sdpa' · #87 opened 3 days ago by gy19
Your request to access this repo has been successfully submitted, and is pending a review from the repo's authors. · #86 opened 3 days ago by g3stz
Model Generating Prefix · #85 opened 4 days ago by dongyulin
Can't get access · #84 opened 4 days ago by ERmak158
Meta-Llama-3-8B-Instruct does not appear to have a file named config.json (6 replies) · #82 opened 5 days ago by RK-RK
Access issues via Meta website and HF · #81 opened 5 days ago by GiorgioDiSalvo
Plans for a 13B model? · #80 opened 6 days ago by Cheeto96
Not getting the right output (2 replies) · #79 opened 6 days ago by Sardarmoazzam
What memory is needed? · #78 opened 6 days ago by francescofiamingo
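(As a rough rule of thumb, not taken from the thread: an 8B-parameter model in bf16/fp16 needs about 8B × 2 bytes ≈ 16 GB for the weights alone, plus headroom for activations and the KV cache; 4-bit quantization brings the weights down to roughly 5-6 GB.)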
Arrrr, ye landlubbers! · #77 opened 6 days ago by billphoo
KeyError 'llma' in configuration_auto.py · #76 opened 7 days ago by himasrikode
Update README.md · #75 opened 7 days ago by DollarAkshay
Your request to access this repo has been rejected by the repo's authors. (11 replies) · #74 opened 8 days ago by Hoo1196
Changing "eos_token" to <|eot_id|> fixes the overflow of model responses, at least when using the Messages API (1 reply) · #73 opened 10 days ago by myonlyeye
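A minimal sketch of the stop-token fix discussed in #73 (and asked about in #54 and #58), assuming the Hugging Face transformers generate API; the <|eot_id|> id is looked up from the tokenizer rather than hard-coded:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "meta-llama/Meta-Llama-3-8B-Instruct"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

messages = [{"role": "user", "content": "Who are you?"}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

# Llama 3 Instruct ends each assistant turn with <|eot_id|>, not the default EOS,
# so generation keeps running unless <|eot_id|> is also treated as a stop token.
terminators = [
    tokenizer.eos_token_id,
    tokenizer.convert_tokens_to_ids("<|eot_id|>"),
]

outputs = model.generate(input_ids, max_new_tokens=256, eos_token_id=terminators)
print(tokenizer.decode(outputs[0][input_ids.shape[-1]:], skip_special_tokens=True))
```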
How to fine-tune Llama-3 8B Instruct (4 replies) · #72 opened 11 days ago by elysiia
Update config.json · #71 opened 11 days ago by ArthurZ
The request to access the repo was sent several days ago; why hasn't it been approved yet? (7 replies) · #70 opened 11 days ago by water-cui
AttributeError: type object 'AttentionMaskConverter' has no attribute '_ignore_causal_mask_sdpa' (2 replies) · #69 opened 11 days ago by tianke0711
Access granted but still not working (7 replies) · #68 opened 11 days ago by adityar23
Uncaught (in promise) SyntaxError: Unexpected token 'E', "Expected r"... is not valid JSON (3 replies) · #66 opened 11 days ago by sxmss1
Update README.md · #65 opened 12 days ago by AdnanRiaz107
Llama responses are broken during conversation (1 reply) · #64 opened 12 days ago by gusakovskyi
For using the model · #63 opened 12 days ago by yogeshm
Update tokenizer_config.json (8 replies) · #60 opened 12 days ago by Navanit-shorthills
Access Denied · #59 opened 12 days ago by Jerry-hyl
Not outputting <|eot_id|> on SageMaker · #58 opened 12 days ago by zhengsj
Update README.md · #57 opened 12 days ago by inuwamobarak
Batched inference on multiple GPUs (1 reply) · #56 opened 12 days ago by d-i-o-n
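A rough sketch for #56, assuming transformers with accelerate installed so that device_map="auto" can shard the model across all visible GPUs; batched generation with a decoder-only model also needs a pad token and left padding:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "meta-llama/Meta-Llama-3-8B-Instruct"
tokenizer = AutoTokenizer.from_pretrained(model_id)
tokenizer.pad_token = tokenizer.eos_token   # the checkpoint ships without a pad token
tokenizer.padding_side = "left"             # decoder-only models must be left-padded

# device_map="auto" (provided by accelerate) spreads the layers over the available GPUs.
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

questions = ["Name three prime numbers.", "What is the capital of Japan?"]
prompts = [
    tokenizer.apply_chat_template(
        [{"role": "user", "content": q}], tokenize=False, add_generation_prompt=True
    )
    for q in questions
]
# The chat template already inserts <|begin_of_text|>, so skip the tokenizer's own special tokens.
batch = tokenizer(prompts, return_tensors="pt", padding=True, add_special_tokens=False).to(model.device)

outputs = model.generate(
    **batch,
    max_new_tokens=64,
    pad_token_id=tokenizer.eos_token_id,
    eos_token_id=[tokenizer.eos_token_id, tokenizer.convert_tokens_to_ids("<|eot_id|>")],
)
prompt_len = batch["input_ids"].shape[-1]
for out in outputs:
    print(tokenizer.decode(out[prompt_len:], skip_special_tokens=True))
```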
Badly Encoded Tokens/Mojibake (1 reply) · #55 opened 13 days ago by muchanem
How to use EOT_ID (3 replies) · #54 opened 13 days ago by saksham-lamini
Denied permission to download (3 replies) · #51 opened 14 days ago by TimPine
Request to access is still pending review (26 replies) · #50 opened 14 days ago by Hoo1196
mlx_lm.server gives wonky answers · #49 opened 14 days ago by conleysa
Tokenizer mismatch all the time (2 replies) · #47 opened 15 days ago by tian9
Could anyone tell me how to set the prompt template when using the model with transformers in PyCharm? (1 reply) · #46 opened 15 days ago by LAKSERS
meta-llama/Meta-Llama-3-8B-Instruct (1 reply) · #45 opened 15 days ago by interpio
Instruct format? (3 replies) · #44 opened 16 days ago by m-conrad-202
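For #44, #46, and #33: a short sketch of how the instruct prompt is usually built with the tokenizer's chat template instead of by hand (assuming the transformers apply_chat_template API); add_generation_prompt=True appends the trailing assistant header so the model answers rather than continuing the user turn:

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("meta-llama/Meta-Llama-3-8B-Instruct")

messages = [
    {"role": "system", "content": "You are a concise assistant."},
    {"role": "user", "content": "What is the capital of France?"},
]

# tokenize=False returns the formatted string so the layout is visible:
#   <|begin_of_text|><|start_header_id|>system<|end_header_id|>\n\n...<|eot_id|>
#   <|start_header_id|>user<|end_header_id|>\n\n...<|eot_id|>
#   <|start_header_id|>assistant<|end_header_id|>\n\n
prompt = tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)
print(prompt)
```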
Warning: The attention mask and the pad token id were not set.. (2 replies) · #40 opened 16 days ago by Stephen-smj
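For #40: a small sketch of the usual way to silence that warning, passing the attention mask returned by the tokenizer and an explicit pad_token_id (the checkpoint defines no dedicated pad token, so the EOS id is commonly reused); assumes the transformers API:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "meta-llama/Meta-Llama-3-8B-Instruct"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

enc = tokenizer("Write one sentence about otters.", return_tensors="pt").to(model.device)
out = model.generate(
    input_ids=enc["input_ids"],
    attention_mask=enc["attention_mask"],  # explicit mask: silences the warning
    pad_token_id=tokenizer.eos_token_id,   # no pad token in the checkpoint, reuse EOS
    max_new_tokens=40,
)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```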
MPS support quantification · #39 opened 16 days ago by tonimelisma
`meta-llama/Meta-Llama-3-8B-Instruct` model with SageMaker (1 reply) · #38 opened 16 days ago by aak7912
Problem with the tokenizer (2 replies) · #37 opened 16 days ago by Douedos
How to output an answer without side chatter (7 replies) · #36 opened 16 days ago by Gerald001
ValueError: You can't train a model that has been loaded in 8-bit precision on a different device than the one you're training on. (9 replies) · #35 opened 17 days ago by madhurjindal
Does instruct need add_generation_prompt? · #33 opened 17 days ago by bdambrosio
Error while downloading the model · #32 opened 17 days ago by amarnadh1998
Garbage responses (2 replies) · #30 opened 17 days ago by RainmakerP